In general, if the 7z file is small, the second method is worth using, because it doesn't require you to install an ISO creation program. However, if the 7z file is large, I recommend the first method, because the upload and download in method 2 may take a long time.
I have a 500GB external drive full of data. I need to free up this drive, and since I have 1TB of storage in OneDrive, I want to upload its contents there. I have a fairly good connection too, so I figured it wouldn't take too long. The thing is that I can't find a way to upload this folder with the OneDrive Windows app.
Looking online, everyone suggests moving the folders to my computer and then uploading them, but that isn't an option since I don't have enough space. The other option is using the web app, but it is very unreliable and stops working more often than not. Besides, if it stops, the upload has to start over from the beginning.
You may be running into the 20GB OneDrive upload limit. Divide the files into manageable folders, and make compressed archives of those folders. You can nest folders inside other folders. Use a disk-space visualizer to identify which folders you need to break into sub-folders.
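As a sketch of the "manageable folders" step, here is a hypothetical greedy packer that groups items into batches that each stay under a size cap. The 20 GB figure and the file list are assumptions for illustration, not part of any OneDrive API:

```python
# Greedy packing of items into upload batches under a size cap.
# Names and sizes below are made up for illustration.

LIMIT = 20 * 1024**3  # assumed per-upload cap in bytes (20 GB)

def pack_into_batches(files, limit=LIMIT):
    """files: iterable of (name, size_in_bytes). Returns a list of batches."""
    batches, current, current_size = [], [], 0
    # Largest-first packing keeps batch counts low.
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        if size > limit:
            raise ValueError(f"{name} alone exceeds the cap; split it first")
        if current_size + size > limit:
            batches.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        batches.append(current)
    return batches

GB = 1024**3
demo = [("photos", 15 * GB), ("videos", 18 * GB), ("docs", 4 * GB)]
print(pack_into_batches(demo))  # [['videos'], ['photos', 'docs']]
```

Each batch can then be compressed into its own archive and uploaded separately.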
Imagine capturing an archive that contained files like "DivorceAttorneys.xls" or "\Contracts\2016\Panama-Gov\OffShoreAccounts.pdf". If you are uploading encrypted Zip files to Amazon or Azure, it's possible that file names and paths are being extracted and indexed, perhaps by Cortana in OneDrive, even though you intend to keep the contents of the Zip files 100% private. When you e-mail encrypted Zip files to others, or upload/download such files through proxy servers, it is also possible that the e-mail gateways and proxy servers are examining and logging the plaintext file names and paths in the otherwise-encrypted Zip files too. In some countries, just having suspicious file names could land you in jail.
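The file-name leak is easy to verify for the ZIP format itself: names are stored unencrypted in the local headers and central directory even when the file data is encrypted, so anyone holding the archive bytes can read them. A minimal demonstration with Python's stdlib (using an unencrypted zip, since the header layout is the same either way):

```python
import io
import zipfile

# Build a zip in memory containing a sensitively named file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("DivorceAttorneys.xls", b"secret contents")

raw = buf.getvalue()
# The file name sits in the raw archive bytes in plaintext.
print(b"DivorceAttorneys.xls" in raw)  # True
```

The usual workaround for 7-Zip's own format is header encryption (`-mhe=on`), which encrypts the file names and paths along with the data.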
For example, you could encrypt 500GB of your personal files using 7-Zip and an encryption passphrase stored in KeePass, then upload that archive to Amazon Glacier or Azure Cool Blob Storage for pennies per month. Because your data is encrypted locally, you don't have to trust Amazon or Microsoft. Because you're using PowerShell to automate the process, it can be done quickly and conveniently. And because the encryption passphrase is stored in KeePass, the passphrase does not need to be hard-coded into any plaintext scripts.
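A minimal sketch of the archive step, assuming `7z` is on the PATH and that the passphrase has already been fetched from KeePass by other means. The archive name, source path, and passphrase below are placeholders:

```python
import os
import shutil
import subprocess

def build_7z_command(archive, source, passphrase):
    # -p sets the passphrase; -mhe=on also encrypts the archive
    # headers, so file names and paths are hidden too (7z format only).
    return ["7z", "a", "-t7z", f"-p{passphrase}", "-mhe=on", archive, source]

cmd = build_7z_command("backup.7z", "D:/data", "placeholder-passphrase")
print(cmd)

# Only invoke 7z if it is installed and the source folder exists.
if shutil.which("7z") and os.path.isdir("D:/data"):
    subprocess.run(cmd, check=True)
```

The resulting `backup.7z` is then what gets uploaded to Glacier or Cool Blob Storage; the cloud provider only ever sees ciphertext.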
In long: I split large archives into parts before uploading. When I upload those files, they are checksum-checked, so if I try to upload a (simply renamed) backup of those files, they are not added, because they have the same checksum as the originals.
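The rejection behaviour follows directly from content hashing: the checksum depends only on the bytes, not the name, so a renamed copy hashes identically. A toy illustration of such a dedup index:

```python
import hashlib

store = {}  # hash -> bytes, simulating the hoster's dedup index

def upload(name, data):
    # The index is keyed purely by content hash; the name is irrelevant.
    digest = hashlib.sha256(data).hexdigest()
    if digest in store:
        return f"{name}: rejected, duplicate of existing content"
    store[digest] = data
    return f"{name}: stored"

part = b"bytes of archive.7z.001"
print(upload("archive.7z.001", part))       # stored
print(upload("backup-renamed.001", part))   # rejected, duplicate
```

Only one copy ever lands in the store, no matter how many names the same bytes are uploaded under.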
However, nothing is perfect: some of the parts I upload can turn out corrupted when I try to access them later (even from another PC, since this is the best method to transfer files between PCs that are not connected at the same time, beating oversized email attachments), or the link is slow, or the server crashes.
For now, whenever I back something up, I add 10% par2 redundancy, and so far this has been enough. However, for some archives I want one or two fully redundant backups. Achieving this with par2 alone is impractical, because rebuilds would take hours at 100% CPU, which costs time and electricity. Currently, I re-archive the files I want to back up, but this produces new archives that are incompatible with the old ones. So, if I can only partially retrieve the old archive and the new archive, there is no way to combine them to recover the original files: I have made two backups with two points of failure. Instead, if I could upload two or three interchangeable sets, I would have only 1/2 or 1/3 of a point of failure, because the same part would have to get corrupted in every copy to make the archive unreadable, and at that point I could use my 10% par2 redundancy to rebuild the missing part for all backups.
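Interchangeable copies pay off because corruption is per-part: as long as at least one copy of each part survives, the archive can be reassembled by picking, part by part, a copy whose checksum still matches. A toy sketch of that repair step (real par2 would then cover the remaining case where every copy of a part is bad):

```python
import hashlib

def sha(b):
    return hashlib.sha256(b).hexdigest()

# Original archive cut into parts; checksums recorded at backup time.
parts = [b"part-0", b"part-1", b"part-2"]
expected = [sha(p) for p in parts]

# Two uploaded copies, each corrupted in a *different* part.
copy_a = [b"part-0", b"XXXXXX", b"part-2"]
copy_b = [b"part-0", b"part-1", b"YYYYYY"]

# Rebuild by taking, for each part, the first copy that still verifies.
rebuilt = []
for i, want in enumerate(expected):
    good = next(c[i] for c in (copy_a, copy_b) if sha(c[i]) == want)
    rebuilt.append(good)

print(rebuilt == parts)  # True
```

The archive only becomes unrecoverable when the same index is corrupt in every copy at once.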
I thought about that long and hard, and I would like to point out some facts:
- That stuff bon-de-rado said: why don't you just add some "panning", upload the files, and remove the panning before extracting?
- Uploading 3 copies of the same file is still (some) flooding. If you need 100 files, you will upload 300; if 10,000, you will upload 30,000; and so on. It's still the same web portal. It's not without reason that they have this checksum rejecter.
- Since you expect the same portal to not host your files correctly (having errors in 3 copies of the same file, at different locations within that file), you should maybe think about switching to a hoster that can be trusted. I have some quite cheap webspace and use (S)FTP, and I have never had any problems. Connection errors occur, but I can automatically resume from the last position.
- Breaking backwards compatibility: if 7z sees a file and it's not in the spec, it will quit. Panned files are not in the spec, so you would break compatibility. Though I really don't get why you can't just remove the panning before extracting. Can you clarify that?
If you are a good scripter, you could script this. You should also keep a database of all your chunks on those hosters. A daily job should go through each hoster and alert you if some chunks can no longer be reached, and an automated correction job should re-upload the missing chunks using data from the other hosters.
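A sketch of that monitoring idea, with the hosters simulated as in-memory dicts (a real version would probe each hoster's API or URL instead; all names here are made up):

```python
# Each hoster maps chunk-id -> bytes; the catalog records where
# every chunk is supposed to live.
hosters = {
    "hoster_a": {"chunk1": b"aa", "chunk2": b"bb"},
    "hoster_b": {"chunk1": b"aa"},  # chunk2 has gone missing here
}
catalog = {
    "chunk1": ["hoster_a", "hoster_b"],
    "chunk2": ["hoster_a", "hoster_b"],
}

def daily_check():
    """Find missing chunks and re-upload them from a surviving copy."""
    repaired = []
    for chunk, locations in catalog.items():
        present = [h for h in locations if chunk in hosters[h]]
        for h in set(locations) - set(present):
            if present:  # copy from any hoster that still has it
                hosters[h][chunk] = hosters[present[0]][chunk]
                repaired.append((chunk, h))
    return repaired

print(daily_check())  # [('chunk2', 'hoster_b')]
```

After the first run the missing chunk is restored, so a second run finds nothing to repair.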
- Trimming is not as simple to batch as panning. Panning is just `copy /b file1+file2 file_panned`; still, with some tricks, trimming is possible too. But I thought that adding an EOF marker would not be so difficult if you already know how the split files are made, and just asking costs nothing.
- Actually, the checksum is not used to reject but to substitute: say an online file gets corrupted, or the server it is on crashes. If you can re-upload the file, all the existing references simply point to the new copy, so if you keep a database linking to all those files, no records need to change. So we can say the checksum works "client-wise".
- I'm not talking about errors while uploading: obviously, if you hit such an error, you can just resume or restart the upload until it completes. I'm talking about when you try to access the file later and no longer have the originals on your HD. It's not a matter of a trusted or untrusted hoster: so far, out of thousands of files, I got only one server-side error. But there is some sensitive data I want to back up for a longer time, so I want to be protected even in the worst case.
- I don't understand what you say about backwards compatibility: if you pan one whole 7z file, it is still readable, because it is already wrapped and has the footer EOF marker. The problem arises only with split archives, where each part lacks an EOF marker. If you are referring to opening new files with an older version of 7-Zip, that is like asking to open an LZMA2 7z with version 4.XX, which is not reasonable. The rule is that archives made with older 7z versions will always be readable by newer releases, but new archives can only be read starting from the release they were made with, and an EOF marker keeps this rule true.
As I already said, I thought that just asking would do no harm and could be a nice addition for those who, like me, want to keep files online, and trimming is not as natural as panning.
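"Panning" here is byte-level concatenation (`copy /b file1+file2`), and the objection is that removing it again needs to know where the real data ends. With a filler of known length the trim is trivial, which is roughly bon-de-rado's suggestion sketched out:

```python
# "Panning": append filler bytes so the hoster sees different content;
# trim them again before handing the part back to 7-Zip.
PAD = b"\x00" * 16  # arbitrary filler; only its length must be known

def pan(data: bytes) -> bytes:
    return data + PAD

def unpan(panned: bytes) -> bytes:
    return panned[:-len(PAD)]

part = b"raw bytes of archive.7z.002"
print(unpan(pan(part)) == part)          # True
print(len(pan(part)) - len(part))        # 16 extra bytes
```

Varying the filler per copy also changes each copy's checksum, which is what defeats the hoster's duplicate rejection.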
In the best case, it needs double the HD space to accommodate the new files, plus the time to generate them. Native reading of panned files would need no extra space and no extra time.
I think you have to better explain what you mean by flooding. I pay for unlimited space; isn't it my right to use it? If I place 3 GB online, who cares whether they are all different files or 3 copies of the same 1 GB file? Whatever check is done online is there for one reason: to allow files to always be referred to by the same link. Think of sharing a file with a teammate: instead of emailing him the whole file, you just send him a link. Then, if there is a server problem and the file becomes unreachable or damaged, you just upload it again and the original link you sent stays the same. If links were always different, you would have to keep track of everyone you sent it to and update them with the new link. In short, the checksum is computed to give file references long-term stability, provided you can re-upload the file when needed.
Save the file to the cloud: If you need to send a file that's blocked by Outlook, the simplest way is to upload it to OneDrive or a secure network share such as SharePoint, then send a link to the file. If you need to receive a blocked file, ask the sender to upload it to OneDrive or SharePoint and send you a link. Once you receive the link, you can open the file location and download the file.