colin-williams-dev

Transferring Files of Any Size with PowerShell

So, you have a directory that you want to share with your team. It's a big one. You think: "I'll compress this to a zip and email it" to avoid laborious hosting steps. You go to email your project when you are greeted with this:

[Screenshot: the email client rejecting the attachment for exceeding its size limit]

😑😑😑

I got tired of seeing this, so I wrote a couple of PowerShell scripts that break a file into arbitrarily sized chunks, which you can then transfer across however many emails it takes.

This workflow requires the SENDER and RECIPIENT to each run one of the scripts. (The directory you clone the scripts into must be on your PATH if you don't want to reference them by their full paths.)

The steps are simple:

Sender - split-file.ps1

  1. Sender runs:
    • split-file.ps1 -inFile "C:\path\to\your\file.zip" -buffSize 4MB
    • the chunk files are written to the current working directory, named "1", "2", "3", ...
    • pass whatever -buffSize your transfer limit calls for (e.g. -buffSize 100KB); see the example session below
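
As a minimal sketch (assuming a hypothetical archive of roughly 10 MB, not a real run), the sender's session would look like:

PS C:\transfer> split-file.ps1 -inFile "C:\path\to\your\file.zip" -buffSize 4MB
wrote chunk 1 to file
wrote chunk 2 to file
wrote chunk 3 to file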

Recipient - converge-files.ps1

  1. Recipient runs:
    • .\converge-files.ps1 -outFile "reassembled_file.zip" -buffSize 4MB
    • this searches the current working directory for filenames that are bare digits and stitches them back together in numeric order
    • pass the same -buffSize argument as you did to split-file; it sizes the copy buffer, and matching the chunk size keeps each chunk to a single read (example below)
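
And the recipient's side, again with the hypothetical paths from above:

PS C:\transfer> .\converge-files.ps1 -outFile "reassembled_file.zip" -buffSize 4MB
Re-assembled file written to reassembled_file.zip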

Linked below is the repository holding the two scripts, plus a README with a more thorough explanation. Feel free to clone it if you think it will be helpful! And feel free to open an issue or a PR if you see ways to improve the scripts (I am a beginner with PowerShell).

In case you don't want to download anything, here are the contents of the two scripts:

1. split-file.ps1



param(
  [string]$inFile,
  [int]$buffSize = 4MB
)

function split($inFile, [int] $buffSize){

  # Open a read stream on the source file and reuse one buffer for every chunk
  $stream = [System.IO.File]::OpenRead($inFile)
  $chunkNum = 1
  $barr = New-Object byte[] $buffSize

  # Read() returns the number of bytes read; it returns 0 at end-of-file, ending the loop
  while ($bytesRead = $stream.Read($barr, 0, $buffSize)){
    $outFile = "$chunkNum"
    # Create() truncates any leftover file with the same name; OpenWrite() would not
    $outStream = [System.IO.File]::Create($outFile)
    $outStream.Write($barr, 0, $bytesRead)
    $outStream.Close()
    Write-Output "wrote chunk $outFile to file"
    $chunkNum += 1
  }

  $stream.Close()
}

split @PSBoundParameters



2. converge-files.ps1



param(
    [string]$outFile,
    [int]$buffSize = 4MB
)

function converge($outFile, [int] $buffSize) {

  # Collect the chunk files (names that are all digits) and sort them
  # numerically, so chunk "10" lands after "9" rather than after "1"
  $files = Get-ChildItem | Where-Object { $_.Name -match '^\d+$' } | Sort-Object { [int]$_.Name }

  # Create() truncates any previous file with the same name
  $outStream = [System.IO.File]::Create($outFile)
  $buffer = New-Object byte[] $buffSize

  foreach ($file in $files) {
      $inStream = [System.IO.File]::OpenRead($file.FullName)
      # Loop until Read() returns 0, so a chunk larger than the buffer is still copied in full
      while ($bytesRead = $inStream.Read($buffer, 0, $buffSize)) {
          $outStream.Write($buffer, 0, $bytesRead)
      }
      $inStream.Close()
  }

  $outStream.Close()
  Write-Output "Re-assembled file written to $outFile"
}

converge @PSBoundParameters



Note: "Splatting", or using the @ before the PSBoundParameters keyword will pass all the parameter arguments to the fn invocation in the script from how you invoked it in the CLI

How it works...

  • The split (first) script opens a read stream on the -inFile argument and breaks the file into chunks of bytes. Each pass of the while loop reads up to -buffSize bytes into the byte array ($barr) and writes them to a new chunk file, until the stream's .Read returns zero and the loop terminates. The chunk files are named by the incrementing integer $chunkNum, starting at 1, which is what lets converge recognize them later as a numbered sequence.
  • The converge script searches the CWD (where split writes its output) for filenames that are digits and sorts them numerically to preserve the original order of the byte chunks; see the note below on why that sort matters. It then uses a write stream and a byte array to recombine the chunks into a single output file matching the original.
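
To see why the numeric sort matters, compare the default lexical sort with the [int]-cast sort the script uses; past nine chunks, the lexical order would scramble the file:

'1','2','10' | Sort-Object              # 1, 10, 2  (lexical order: wrong)
'1','2','10' | Sort-Object { [int]$_ }  # 1, 2, 10  (numeric order: correct)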

Voila! Your file has been duplicated and reconstructed, and you won't notice any difference. The point is the intermediary stage between the two scripts: the chunk files can each be sent below any file transfer size limit, then reconstructed ("converged") when they arrive at their destination. 🏴🏴🏴
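
If you want proof that the round trip was lossless, compare hashes (the paths here are the hypothetical ones from earlier); identical hashes mean identical bytes:

# Both hashes should match if reassembly succeeded
Get-FileHash "C:\path\to\your\file.zip", ".\reassembled_file.zip" -Algorithm SHA256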

My public repository for cloning the scripts

  • Throw me a react on this blog and a star on the repo if this was helpful for you! <3
