  1. #81
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by Roland0 View Post
    Code:
    gzip -dc /mnt/MusicBackup/images/pCPLounge.img.gz > /dev/mmcblk0
    should do the trick
    Simple as that? I had no idea that you could archive and unarchive a whole disk that way. I assumed it dealt with just files, rather than a complete file system with a partition table.

  2. #82
    Senior Member
    Join Date
    Aug 2012
    Location
    Austria
    Posts
    1,002
    Quote Originally Posted by chill View Post
    Simple as that? I had no idea that you could archive and unarchive a whole disk that way. I assumed it dealt with just files, rather than a complete file system with a partition table.
    In Unix, everything is a file
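    A block device is readable and writable with ordinary file tools, which is why that works. A minimal sketch of the round trip (paths are just examples, and the card should not be mounted while you do this):
    Code:
    # back up the whole card, partition table and all
    gzip -c /dev/mmcblk0 > /mnt/MusicBackup/images/pCPLounge.img.gz
    # restore, overwriting everything on the card
    gzip -dc /mnt/MusicBackup/images/pCPLounge.img.gz > /dev/mmcblk0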
    Various SW: Web Interface | Playlist Editor / Generator | Music Classification | Similar Music | Announce | EventTrigger | LMSlib2go | ...
    Various HowTos: build a self-contained LMS | Bluetooth/ALSA | Control LMS with any device | ...

  3. #83
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by Roland0 View Post
    In Unix, everything is a file
    I have a lot to learn :-)

  4. #84
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    So if /dev/mmcblk0 is a 16GB SD card, but I'm only using 1GB for the two partitions, the 'bs' and 'count' options in dd allow me to make an image that's only 1GB. If I use gzip to make a compressed image of /dev/mmcblk0, all the empty space after 1GB will compress down to virtually nothing, but what will happen if I try to decompress that image onto an SD card that's smaller than 16GB? Is there a way, or a need, to limit the scope of the original image using something like the 'bs' and 'count' options?
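    In other words, something like this (sizes made up for illustration):
    Code:
    # image only the first 1GB of the card
    dd if=/dev/mmcblk0 of=/tmp/backup.img bs=1M count=1024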

  5. #85
    Senior Member
    Join Date
    May 2010
    Location
    London, UK
    Posts
    525
    Quote Originally Posted by chill View Post
    what will happen if I try to decompress that image onto an SD card that's smaller than 16GB?
    That was one reason why I decided to use a tar-based approach. It's perhaps not as straightforward to set up properly in the first place, or to restore from, and I had little experience with dd and the like. But I've only had to restore once in eight years.
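    For the record, one possible shape for such a scheme (not necessarily mrw's exact setup; device names, mount points and archive names are illustrative) is to save the partition table and the file contents separately:
    Code:
    # dump the partition table; restore later with 'sfdisk /dev/mmcblk0 < ptable.txt'
    sfdisk -d /dev/mmcblk0 > ptable.txt
    # one archive per mounted partition
    tar -czf boot.tar.gz -C /mnt/mmcblk0p1 .
    tar -czf root.tar.gz -C /mnt/mmcblk0p2 .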

  6. #86
    Senior Member Greg Erskine's Avatar
    Join Date
    Sep 2006
    Location
    Sydney, Australia
    Posts
    1,830
    hi chill,

    This might be worth a try. It is based on how we used to generate pCP images a few years ago. The image only takes a few seconds to generate.

    My first attempt at writing a card from this image resulted in an SD card that appears to be working properly.

    Code:
    $ fdisk -l /dev/mmcblk0
    
    Disk /dev/mmcblk0: 3724 MB, 3904897024 bytes, 7626752 sectors
    119168 cylinders, 4 heads, 16 sectors/track
    Units: sectors of 1 * 512 = 512 bytes
    
    Device       Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
    /dev/mmcblk0p1    128,0,1     127,3,16          8192      73727      65536 32.0M  c Win95 FAT32 (LBA)
    /dev/mmcblk0p2    1023,3,16   1023,3,16        73728     483327     409600  200M 83 Linux
                                                                               ===== 
                                                                                232
    $ dd if=/dev/mmcblk0 of=/tmp/backup.img bs=1M count=232
    I actually had 3 partitions on this SD card, and obviously partition 3 was not copied.
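    Incidentally, a quick way to sanity-check a truncated image like this (assuming your fdisk, like the util-linux one, accepts an image file in place of a device) is to read the partition table straight back out of it:
    Code:
    $ fdisk -l /tmp/backup.img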

    regards
    Greg

  7. #87
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by Greg Erskine View Post
    hi chill,

    This might be worth a try. It is based on how we used to generate pCP images a few years ago. The image only takes a few seconds to generate.

    My first attempt at writing a card from this image resulted in an SD card that appears to be working properly.

    Code:
    $ fdisk -l /dev/mmcblk0
    
    Disk /dev/mmcblk0: 3724 MB, 3904897024 bytes, 7626752 sectors
    119168 cylinders, 4 heads, 16 sectors/track
    Units: sectors of 1 * 512 = 512 bytes
    
    Device       Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
    /dev/mmcblk0p1    128,0,1     127,3,16          8192      73727      65536 32.0M  c Win95 FAT32 (LBA)
    /dev/mmcblk0p2    1023,3,16   1023,3,16        73728     483327     409600  200M 83 Linux
                                                                               ===== 
                                                                                232
    $ dd if=/dev/mmcblk0 of=/tmp/backup.img bs=1M count=232
    I actually had 3 partitions on this SD card, and obviously partition 3 was not copied.

    regards
    Greg
    Thanks Greg - that's the approach I went for yesterday. I scripted it so that it would work even after resizing partition 2. Here's my BackupSD.sh. It extracts the sector size from the SD card, and uses the EndLBA of the second partition plus one sector as the dd count:
    Code:
    #!/bin/sh
    # Usage: BackupSD.sh <output image file>
    imagefile="$1"
    # sector size in bytes, from fdisk's "Units:" line
    bs=$(fdisk -l /dev/mmcblk0 | grep Units: | awk -F= '{print $2}' | awk '{print $1}')
    echo "blocksize=$bs"
    # EndLBA of partition 2, plus one sector = total sectors to copy
    count=$(fdisk -l /dev/mmcblk0 | grep mmcblk0p2 | awk '{print $5+1}')
    echo "count=$count"
    dd if=/dev/mmcblk0 bs="$bs" count="$count" of="$imagefile"
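    A hypothetical invocation (the backup path is just an example) looks like:
    Code:
    $ ./BackupSD.sh /mnt/MusicBackup/images/pCPLounge.img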
    The used portion of my SD card has grown to 1GB, and the image takes just under a minute to create. I do a 'pcp bu' just before calling this, to make sure any changes I've made are included in the image.

    I'll need to investigate how to rsync it properly though. I ran the script for the first time overnight last night (with rsync options -rtvhiO), and it evidently only updated the target file's timestamp. I think this is because the image will always be the same size, so rsync believes that the source and target files are identical apart from their timestamps. I'll need to force it to update unless the contents are identical - presumably using the slower '-c' 'skip based on checksum' option.
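    For illustration, the nightly transfer with the checksum option added might look like this (host name and paths invented):
    Code:
    rsync -rtvhiOc /tmp/backup.img nas:/mnt/MusicBackup/images/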

  8. #88
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by chill View Post
    I'll need to investigate how to rsync it properly though. I ran the script for the first time overnight last night (with rsync options -rtvhiO), and it evidently only updated the target file's timestamp. I think this is because the image will always be the same size, so rsync believes that the source and target files are identical apart from their timestamps. I'll need to force it to update unless the contents are identical - presumably using the slower '-c' 'skip based on checksum' option.
    Using the checksum option on two 1GB files (one local, one remote) is S L O O O W. The script runs at 3am, so I don't really care how long it takes, but since the checksum option requires the file to be read in its entirety at both ends anyway, using this option to decide whether to skip the file saves no time. In fact it takes extra time, because on top of copying the file it also spends time calculating the checksums. I might as well just use cp instead of rsync, and copy over the current version of the image file every night.
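    In other words (paths illustrative again), a plain copy does the job:
    Code:
    cp /tmp/backup.img /mnt/MusicBackup/images/backup.img
    rsync's -W/--whole-file option, which skips the delta algorithm entirely, would be another way to avoid the extra reads.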

  9. #89
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by mrw View Post
    That was one reason why I decided to use a tar based approach. Perhaps not as straightforward to set up properly in the first place, or restore from. And I had little experience with dd, etc. But Iĺve only had to restore once in eight years.
    I've used tar occasionally, basically as a zip replacement, to bundle a selection of files. I had assumed that using such a tool, or any of the tools suggested by Roland0 earlier, to recover an SD card with more than one partition would require me to manually recreate the file system and partitions, and then recover the files into those partitions. I'm intrigued that the 'everything is a file' philosophy might mean that the file system and partition info can be included in the backup. dd is working for me, but I'm still interested in experimenting with these other tools.

  10. #90
    Senior Member chill's Avatar
    Join Date
    Mar 2007
    Location
    Nottingham, UK
    Posts
    1,652
    Quote Originally Posted by chill View Post
    I'll need to investigate how to rsync it properly though. I ran the script for the first time overnight last night (with rsync options -rtvhiO), and it evidently only updated the target file's timestamp. I think this is because the image will always be the same size, so rsync believes that the source and target files are identical apart from their timestamps. I'll need to force it to update unless the contents are identical - presumably using the slower '-c' 'skip based on checksum' option.
    I'm talking nonsense. The file was correctly updated using those rsync options. rsync uses file size *and* timestamp to determine whether the file has changed, so each new version of the image file will be copied across, even though the file size is the same as the previous version. The -t option merely ensures that the source file's timestamp is preserved on the target.
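    The -i in the option set makes that decision visible: a transfer triggered by a timestamp difference on an otherwise same-sized file should show up in the output as something like:
    Code:
    >f..t...... backup.img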
