-rw-r--r--  ChangeLog                                    99
-rw-r--r--  VERSION                                       2
-rw-r--r--  debian/changelog                             14
-rw-r--r--  debian/control                                2
-rw-r--r--  doc/RELEASE_NOTES                            28
-rw-r--r--  mic/chroot.py                                 2
-rw-r--r--  mic/imager/raw.py                           140
-rw-r--r--  mic/kickstart/custom_commands/partition.py    7
-rw-r--r--  mic/utils/BmapCreate.py                     298
-rw-r--r--  mic/utils/Fiemap.py                         252
-rw-r--r--  mic/utils/fs_related.py                      29
-rw-r--r--  mic/utils/gpt_parser.py                     325
-rw-r--r--  mic/utils/misc.py                            12
-rw-r--r--  mic/utils/partitionedfs.py                   31
-rw-r--r--  packaging/mic.changes                        10
-rw-r--r--  packaging/mic.dsc                             2
-rw-r--r--  packaging/mic.spec                            6
17 files changed, 987 insertions, 272 deletions
diff --git a/ChangeLog b/ChangeLog
index d2827ae..2bc96d2 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,16 @@
-Release 0.18 - Mon Apr 03 2013
-===========================================================
+Release 0.19 - Thu May 16 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
+ - new distribution support: Ubuntu 13.04 and openSUSE 12.3
+ - introduce '--part-type' to handle GPT partition
+ - copy bmap creation from bmap-tools
+ - update some depends and fix depends issue
+ - bug fix:
+ - fix bug autologinuser always set
+ - fix symlink bind mount left issue
+ - fix '/var/lock' non-existent throw traceback
+
+Release 0.18 - Mon Apr 03 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* put build_id before image name for release option
* mount build directory as tmpfs to speed up
* enable --priority in ks to set priority
@@ -16,8 +27,8 @@ Release 0.18 - Mon Apr 03 2013
- clean up some bad indentations
- improve some error messages
-Release 0.17 - Tue Feb 28 2013
-===========================================================
+Release 0.17 - Tue Feb 28 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* support new distribution Fedora 18
* enable to handle more than 3 partitions
* support partitions without mount point
@@ -32,14 +43,14 @@ Release 0.17 - Tue Feb 28 2013
- clean up some mess in utils/misc.py
- clean up pylint issue in creator.py
-Release 0.16.3 - Wed Feb 06 2013
-===========================================================
+Release 0.16.3 - Wed Feb 06 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* fix no key 'HOME' in environ variable failure
* remove suffix when release specified
* roll back to original naming for release
-Release 0.16 - Wed Jan 30 2013
-===========================================================
+Release 0.16 - Wed Jan 30 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* add GPT support for UEFI format
- add --ptable=gpt option in kickstart to enable GPT
- add simple GPT parser to parse PARTUUID
@@ -69,8 +80,8 @@ Release 0.16 - Wed Jan 30 2013
- refactor try except statement in baseimager
- fix existing loop images overwritten
-Release 0.15.3 - Wed Jan 23 2013
-===========================================================
+Release 0.15.3 - Wed Jan 23 2013 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* urgent bug fix:
- fix loop device not cleaned issue
- fix bootstrap dirs not unmounted issue
@@ -85,8 +96,8 @@ Release 0.15.3 - Wed Jan 23 2013
- clean up the mess 'directory not empty'
- fix type error when calling mknod
-Release 0.15 - Tue Dec 13 2012
-===========================================================
+Release 0.15 - Tue Dec 13 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* adapt new mechanism for bootstrap mode
- create 'mic-bootstrap-x86-arm' by obs build
- publish 'mic-bootstrap-x86-arm' into server repo
@@ -108,13 +119,13 @@ Release 0.15 - Tue Dec 13 2012
- fix traceback when failed to unmap kpartx device
- fix timestamp incorrect issue in logfile
-Release 0.14.2 - Wed Nov 14 2012
-===========================================================
+Release 0.14.2 - Wed Nov 14 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* support dracut for live image
* update bmap version to 1.1
-Release 0.14.1 - Fri Oct 15 2012
-===========================================================
+Release 0.14.1 - Fri Oct 15 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* support bmap file for ivi flashing tool
* just warning in chroot when not Tizen/MeeGo chroot dir
* fix logfile lost in bootstrap mode
@@ -123,8 +134,8 @@ Release 0.14.1 - Fri Oct 15 2012
- fix https proxy issue in yum backend
- avoid traceback when loop instance is NoneType
-Release 0.14 - Thu Aug 02 2012
-===========================================================
+Release 0.14 - Thu Aug 02 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* use cached metadata when checksum is not changed
* skip non-fatal error in ks file and prompt user to handle
* prompt user to handle when failed to apply img configure
@@ -136,8 +147,8 @@ Release 0.14 - Thu Aug 02 2012
- avoid traceback when converting unsupported type
- fix mic --version ugly output
-Release 0.13 - Wed Jul 12 2012
-===========================================================
+Release 0.13 - Wed Jul 12 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* create logfile as default when --release specified
* use 'gzip' and 'bzip2' to pack image instead of python
* automatically detect path of 'env' for chroot
@@ -147,8 +158,8 @@ Release 0.13 - Wed Jul 12 2012
- fix unicode issue for logfile
- better fix for 'chroot raw' issue
-Release 0.12 - Wed Jun 20 2012
-===========================================================
+Release 0.12 - Wed Jun 20 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* use default value when @BUILD_ID@ and @ARCH@ not specified
* enhance proxy support in attachment retrieve
* add new --shrink opt for loop image to control img shrinking
@@ -160,8 +171,8 @@ Release 0.12 - Wed Jun 20 2012
- fix src pkgs download failed issue
- fix convert failed issue
-Release 0.11 - Fri Jun 08 2012
-===========================================================
+Release 0.11 - Fri Jun 08 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* support new subcmd 'auto' to handle magic line in ks
* enhance the handle of authentication url and https proxy
* support packing images together and support compressed file format
@@ -174,8 +185,8 @@ Release 0.11 - Fri Jun 08 2012
- fix attachment package url handling
- fix mic ch raw failed issue
-Release 0.10 - Tue May 15 2012
-===========================================================
+Release 0.10 - Tue May 15 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* container support using '%attachment' section in ks
* add --compress-to option to support zip format in loop image
* auto-detect config and plugindir to meet virtualenv and customized install
@@ -186,8 +197,8 @@ Release 0.10 - Tue May 15 2012
- some fixes to enhance authentication url
- refine repostr structure to fix comma issue in baseurl
-Release 0.9 - Fri Apr 13 2012
-===========================================================
+Release 0.9 - Fri Apr 13 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* support pre-install package with zypp backend
* sync /etc/mic/mic.conf to bootstrap
* enhance sorting for version comparison in zypp
@@ -197,8 +208,8 @@ Release 0.9 - Fri Apr 13 2012
* fix liveusb parted mkpart failure, revert mbr size expand in raw
* cleanup /tmp/repolic* dir in the EULA checking
-Release 0.8 - Mon Mar 26 2012
-===========================================================
+Release 0.8 - Mon Mar 26 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* partition alignment support
* remove bootloader option 'quiet vga' for raw
* update dist files in git source
@@ -206,8 +217,8 @@ Release 0.8 - Mon Mar 26 2012
* add 40 system test case for help
* rewrite loop device allocation mechanism
-Release 0.7 - Fri Mar 02 2012
-===========================================================
+Release 0.7 - Fri Mar 02 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* zypp backend: fixed a fatal issue of unreleasable loop devs
* zypp backend: more friendly output message
* backend: share cached rpm files between yum and zypp
@@ -217,8 +228,8 @@ Release 0.7 - Fri Mar 02 2012
* fixed issues in openSUSE12.1
* new written man page
-Release 0.6 - Thu Feb 16 2012
-===========================================================
+Release 0.6 - Thu Feb 16 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* give hint when converted image existed
* conf.py: proxy scheme check
* space check before copy image
@@ -231,8 +242,8 @@ Release 0.6 - Thu Feb 16 2012
- catch creator error when retrieving bootstrap metadata
- correct matching .metadata file in bootstrap
-Release 0.5 - Mon Feb 06 2012
-===========================================================
+Release 0.5 - Mon Feb 06 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* Rewrite the algorithm of checking free space for download and install
* Add --shell option for convert to recreate image modified by internal shell
* Add -s option for chroot to unpack image
@@ -245,8 +256,8 @@ Release 0.5 - Mon Feb 06 2012
- Fix MANIFEST syntax to be compliant with md5sum
- Correct dependencies for mic in bootstrap
-Release 0.4 - Fri Jan 06 2012
-===========================================================
+Release 0.4 - Fri Jan 06 2012 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* Support bootstrap mode, run with '--runtime=bootstrap'
* Full support for taring-to output, use 'mic ch x.tar'
* Break dependency between backend and baseimage
@@ -256,8 +267,8 @@ Release 0.4 - Fri Jan 06 2012
* Fix NoneType 'createopts' when convert
* Fix no existed local_pkgs_path
-Release 0.3 - Mon Dec 26 2011
-===========================================================
+Release 0.3 - Mon Dec 26 2011 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* Unit test support, run 'make test'
* Enable proxy support in config file
* Refine configmgr and pluginmgr
@@ -271,8 +282,8 @@ Release 0.3 - Mon Dec 26 2011
- Add priority and cost option for repos
- Reintroduced compress-disk-image option
-Release 0.2 - Tue Nov 29 2011
-===========================================================
+Release 0.2 - Tue Nov 29 2011 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* Support btrfs and ext4 fstype for creator, convertor, and chroot
* Append distfiles and Makefile
* Check arch type from repo data
@@ -284,8 +295,8 @@ Release 0.2 - Tue Nov 29 2011
* untrack mic/__version__.py
* Fix some minor issues
-Release 0.1 - Thu Oct 27 2011
-===========================================================
+Release 0.1 - Thu Oct 27 2011 - Gui Chen <gui.chen@intel.com>
+=====================================================================
* Support three subcommand: create, convert, chroot
* Support five image types: fs, loop, raw, livecd, liveusb
* Support two package manager backend: yum and zypp
diff --git a/VERSION b/VERSION
index 249afd5..caa4836 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.18.1
+0.19
diff --git a/debian/changelog b/debian/changelog
index e0d5401..33f263e 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,4 +1,16 @@
-mic (0.18.1-1) unstable; urgency=low
+mic (0.19-1) unstable; urgency=low
+ * new distribution support: Ubuntu 13.04 and openSUSE 12.3
+ * introduce '--part-type' to handle GPT partition
+ * copy bmap creation from bmap-tools
+ * update some depends and fix depends issue
+ * bug fix:
+ - fix bug autologinuser always set
+ - fix symlink bind mount left issue
+ - fix '/var/lock' non-existent throw traceback
+
+ -- Gui Chen <gui.chen@intel.com> Thu, 16 May 2013 17:25:35 +0800
+
+mic (0.18-1) unstable; urgency=low
* put build_id before image name for release option
* mount build directory as tmpfs to speed up
diff --git a/debian/control b/debian/control
index 278c369..86be578 100644
--- a/debian/control
+++ b/debian/control
@@ -24,7 +24,7 @@ Depends: ${misc:Depends}, ${python:Depends}, ${dist:Depends},
syslinux (>= 2:4.05),
extlinux (>= 2:4.05),
libzypp,
- python-zypp,
+ tizen-python-zypp-0.5.14,
python-m2crypto,
python-urlgrabber,
Recommends:
diff --git a/doc/RELEASE_NOTES b/doc/RELEASE_NOTES
index c1c748e..8c9c8f0 100644
--- a/doc/RELEASE_NOTES
+++ b/doc/RELEASE_NOTES
@@ -1,35 +1,25 @@
- MIC Image Creator 0.18 Release Notes
+ MIC Image Creator 0.19 Release Notes
===========================================================
-Released Apr 03 2013
+Released May 16 2013
This release note documents the changes included in the MIC 0.19 release, which
contains new features, enhancements and bug fixes.
New Features & Enhancements
--------------------------
- * put build_id before image name for release option
- * mount build directory as tmpfs to speed up
- * enable --priority in ks to set priority
- * upgrade qemu (mic's depends) to 1.4.0
+ * new distribution support: Ubuntu 13.04 and openSUSE 12.3
+ * introduce '--part-type' to handle GPT partition
+ * copy bmap creation from bmap-tools
+ * update some depends and fix depends issue
Bug Fixes
---------
- * fix debuginfo rpm swig attribute lost
- * fix release option failure with slash
- * fix man page lost in some distros
- * fix bmap file packed to tarball
-
-Code Cleanup
-------------
- * unify import statements to absolute import
- * clean up many undefined in partitionfs.py/loop.py/livecd.py
- * clean up some useless try and raise blocks
- * clean up some bad indentations
- * improve some error messages
+ * fix bug autologinuser always set
+ * fix symlink bind mount left issue
+ * fix '/var/lock' non-existent throw traceback
Resource
--------
-
* SITE: https://www.tizen.org/
* REPO: https://download.tizen.org/tools/
* DOCS: https://source.tizen.org/documentation/reference/mic-image-creator
diff --git a/mic/chroot.py b/mic/chroot.py
index 546c0c8..99fb9a2 100644
--- a/mic/chroot.py
+++ b/mic/chroot.py
@@ -142,6 +142,8 @@ def setup_chrootenv(chrootdir, bindmounts = None, mountparent = True):
"""Default bind mounts"""
for pt in BIND_MOUNTS:
+ if not os.path.exists(pt):
+ continue
chrootmounts.append(fs_related.BindChrootMount(pt,
chrootdir,
None))
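The guard added above makes chroot setup tolerate hosts where a default bind-mount source such as '/var/lock' does not exist. Reduced to its essence it is just an existence filter; a minimal standalone sketch (the candidate list below is illustrative — the real list lives in the BIND_MOUNTS constant of mic/chroot.py):

```python
import os

def existing_mounts(candidates):
    """Keep only bind-mount sources that exist on the host, so a
    missing path like /var/lock no longer raises during chroot setup."""
    return [pt for pt in candidates if os.path.exists(pt)]

# Illustrative candidate list, not the real BIND_MOUNTS constant.
print(existing_mounts(["/proc", "/sys", "/var/lock"]))
```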
diff --git a/mic/imager/raw.py b/mic/imager/raw.py
index 474a76f..8535d66 100644
--- a/mic/imager/raw.py
+++ b/mic/imager/raw.py
@@ -18,9 +18,6 @@
import os
import stat
import shutil
-from fcntl import ioctl
-from struct import pack, unpack
-from itertools import groupby
from mic import kickstart, msger
from mic.utils import fs_related, runner, misc
@@ -193,7 +190,8 @@ class RawImageCreator(BaseImageCreator):
p.label,
fsopts = p.fsopts,
boot = p.active,
- align = p.align)
+ align = p.align,
+ part_type = p.part_type)
self.__instloop.layout_partitions(self._ptable_format)
@@ -450,98 +448,8 @@ class RawImageCreator(BaseImageCreator):
cfg.write(xml)
cfg.close()
- def _bmap_file_start(self, block_size, image_size, blocks_cnt):
- """ A helper function which generates the starting contents of the
- block map file: the header comment, image size, block size, etc. """
-
- xml = "<?xml version=\"1.0\" ?>\n\n"
- xml += "<!-- This file contains block map for an image file. The block map\n"
- xml += " is basically a list of block numbers in the image file. It lists\n"
- xml += " only those blocks which contain data (boot sector, partition\n"
- xml += " table, file-system metadata, files, directories, extents, etc).\n"
- xml += " These blocks have to be copied to the target device. The other\n"
- xml += " blocks do not contain any useful data and do not have to be\n"
- xml += " copied to the target device. Thus, using the block map users can\n"
- xml += " flash the image fast. So the block map is just an optimization.\n"
- xml += " It is OK to ignore this file and just flash the entire image to\n"
- xml += " the target device if the flashing speed is not important.\n\n"
-
- xml += " Note, this file contains commentaries with useful information\n"
- xml += " like image size in gigabytes, percentage of mapped data, etc.\n"
- xml += " This data is there merely to make the XML file human-readable.\n\n"
-
- xml += " The 'version' attribute is the block map file format version in\n"
- xml += " the 'major.minor' format. The version major number is increased\n"
- xml += " whenever we make incompatible changes to the block map format,\n"
- xml += " meaning that the bmap-aware flasher would have to be modified in\n"
- xml += " order to support the new format. The minor version is increased\n"
- xml += " in case of compatible changes. For example, if we add an attribute\n"
- xml += " which is optional for the bmap-aware flasher. -->\n"
- xml += "<bmap version=\"1.1\">\n"
- xml += "\t<!-- Image size in bytes (%s) -->\n" \
- % misc.human_size(image_size)
- xml += "\t<ImageSize> %u </ImageSize>\n\n" % image_size
-
- xml += "\t<!-- Size of a block in bytes -->\n"
- xml += "\t<BlockSize> %u </BlockSize>\n\n" % block_size
-
- xml += "\t<!-- Count of blocks in the image file -->\n"
- xml += "\t<BlocksCount> %u </BlocksCount>\n\n" % blocks_cnt
-
- xml += "\t<!-- The block map which consists of elements which may either\n"
- xml += "\t be a range of blocks or a single block. The 'sha1' attribute\n"
- xml += "\t is the SHA1 checksum of the this range of blocks. -->\n"
- xml += "\t<BlockMap>\n"
-
- return xml
-
- def _bmap_file_end(self, mapped_cnt, block_size, blocks_cnt):
- """ A helper funstion which generates the final parts of the block map
- file: the ending tags and the information about the amount of mapped
- blocks. """
-
- xml = "\t</BlockMap>\n\n"
-
- size = misc.human_size(mapped_cnt * block_size)
- percent = (mapped_cnt * 100.0) / blocks_cnt
- xml += "\t<!-- Count of mapped blocks (%s or %.1f%% mapped) -->\n" \
- % (size, percent)
- xml += "\t<MappedBlocksCount> %u </MappedBlocksCount>\n" % mapped_cnt
- xml += "</bmap>"
-
- return xml
-
- def _get_ranges(self, f_image, blocks_cnt):
- """ A helper for 'generate_bmap()' which generates ranges of mapped
- blocks. It uses the FIBMAP ioctl to check which blocks are mapped. Of
- course, the image file must have been created as a sparse file
- originally, otherwise all blocks will be mapped. And it is also
- essential to generate the block map before the file had been copied
- anywhere or compressed, because othewise we lose the information about
- unmapped blocks. """
-
- def is_mapped(block):
- """ Returns True if block 'block' of the image file is mapped and
- False otherwise.
-
- Implementation details: this function uses the FIBMAP ioctl (number
- 1) to get detect whether 'block' is mapped to a disk block. The ioctl
- returns zero if 'block' is not mapped and non-zero disk block number
- if it is mapped. """
-
- return unpack('I', ioctl(f_image, 1, pack('I', block)))[0] != 0
-
- for key, group in groupby(xrange(blocks_cnt), is_mapped):
- if key:
- # Find the first and the last elements of the group
- first = group.next()
- last = first
- for last in group:
- pass
- yield first, last
-
def generate_bmap(self):
- """ Generate block map file for an image. The idea is that while disk
+ """ Generate block map file for the image. The idea is that while disk
images we generate may be large (e.g., 4GiB), they may actually contain
only little real data, e.g., 512MiB. This data are files, directories,
file-system meta-data, partition table, etc. In other words, when
@@ -551,14 +459,12 @@ class RawImageCreator(BaseImageCreator):
This function generates the block map file for an arbitrary image that
mic has generated. The block map file is basically an XML file which
contains a list of blocks which have to be copied to the target device.
- The other blocks are not used and there is no need to copy them.
-
- This function assumes the image file was originally created as a sparse
- file. To generate the block map we use the FIBMAP ioctl. """
+ The other blocks are not used and there is no need to copy them. """
if self.bmap_needed is None:
return
+ from mic.utils import BmapCreate
msger.info("Generating the map file(s)")
for name in self.__disks.keys():
@@ -567,33 +473,9 @@ class RawImageCreator(BaseImageCreator):
msger.debug("Generating block map file '%s'" % bmap_file)
- image_size = os.path.getsize(image)
-
- with open(bmap_file, "w") as f_bmap:
- with open(image, "rb") as f_image:
- # Get the block size of the host file-system for the image
- # file by calling the FIGETBSZ ioctl (number 2).
- block_size = unpack('I', ioctl(f_image, 2, pack('I', 0)))[0]
- blocks_cnt = (image_size + block_size - 1) / block_size
-
- # Write general information to the block map file, without
- # block map itself, which will be written next.
- xml = self._bmap_file_start(block_size, image_size,
- blocks_cnt)
- f_bmap.write(xml)
-
- # Generate the block map and write it to the XML block map
- # file as we go.
- mapped_cnt = 0
- for first, last in self._get_ranges(f_image, blocks_cnt):
- mapped_cnt += last - first + 1
- sha1 = misc.calc_hashes(image, ('sha1', ),
- first * block_size,
- (last + 1) * block_size)
- f_bmap.write("\t\t<Range sha1=\"%s\"> %s-%s " \
- "</Range>\n" % (sha1[0], first, last))
-
- # Finish the block map file
- xml = self._bmap_file_end(mapped_cnt, block_size,
- blocks_cnt)
- f_bmap.write(xml)
+ try:
+ creator = BmapCreate.BmapCreate(image, bmap_file)
+ creator.generate()
+ del creator
+ except BmapCreate.Error as err:
+ raise CreatorError("Failed to create bmap file: %s" % str(err))
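As the removed docstring noted, a bmap only helps if the raw image was created as a sparse file and the map is generated before the image is copied or compressed (hole information is lost afterwards). The difference between apparent and allocated size is easy to demonstrate; this sketch is independent of mic, and the allocated size it reports is filesystem-dependent (most Linux filesystems allocate nothing for a bare truncate):

```python
import os
import tempfile

# Create a 4 MiB file consisting of a single hole: truncate extends
# the file without writing any data blocks.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.truncate(4 * 1024 * 1024)

st = os.stat(path)
print("apparent size :", st.st_size)           # always 4194304
print("allocated size:", st.st_blocks * 512)   # typically 0 for a fresh hole
os.unlink(path)
```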
diff --git a/mic/kickstart/custom_commands/partition.py b/mic/kickstart/custom_commands/partition.py
index bb63e10..59a87fb 100644
--- a/mic/kickstart/custom_commands/partition.py
+++ b/mic/kickstart/custom_commands/partition.py
@@ -26,15 +26,18 @@ class Mic_PartData(FC4_PartData):
self.deleteRemovedAttrs()
self.align = kwargs.get("align", None)
self.extopts = kwargs.get("extopts", None)
+ self.part_type = kwargs.get("part_type", None)
def _getArgsAsStr(self):
retval = FC4_PartData._getArgsAsStr(self)
if self.align:
retval += " --align"
-
if self.extopts:
retval += " --extoptions=%s" % self.extopts
+ if self.part_type:
+ retval += " --part-type=%s" % self.part_type
+
return retval
class Mic_Partition(FC4_Partition):
@@ -49,4 +52,6 @@ class Mic_Partition(FC4_Partition):
default=None)
op.add_option("--extoptions", type="string", action="store", dest="extopts",
default=None)
+ op.add_option("--part-type", type="string", action="store", dest="part_type",
+ default=None)
return op
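With the option wired into both Mic_PartData and Mic_Partition above, a kickstart 'part' line can carry a GPT partition type. A hypothetical fragment — the sizes, disk name, and the type GUID (shown here as the standard EFI System Partition GUID) are illustrative, not taken from this commit:

```
part /boot --size=64 --ondisk=sda --fstype=vfat --align=4 \
    --part-type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
part / --size=3600 --ondisk=sda --fstype=ext4
```

The value is only meaningful together with GPT partition tables (the '--ptable=gpt' kickstart option added back in release 0.16), since MBR partition tables carry one-byte type codes rather than type GUIDs.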
diff --git a/mic/utils/BmapCreate.py b/mic/utils/BmapCreate.py
new file mode 100644
index 0000000..65b19a5
--- /dev/null
+++ b/mic/utils/BmapCreate.py
@@ -0,0 +1,298 @@
+""" This module implements the block map (bmap) creation functionality and
+provides the corresponding API in form of the 'BmapCreate' class.
+
+The idea is that while images files may generally be very large (e.g., 4GiB),
+they may nevertheless contain only little real data, e.g., 512MiB. This data
+are files, directories, file-system meta-data, partition table, etc. When
+copying the image to the target device, you do not have to copy all the 4GiB of
+data, you can copy only 512MiB of it, which is 4 times less, so copying should
+presumably be 4 times faster.
+
+The block map file is an XML file which contains a list of blocks which have to
+be copied to the target device. The other blocks are not used and there is no
+need to copy them. The XML file also contains some additional information like
+block size, image size, count of mapped blocks, etc. There are also many
+commentaries, so it is human-readable.
+
+The image has to be a sparse file. Generally, this means that when you generate
+this image file, you should start with a huge sparse file which contains a
+single hole spanning the entire file. Then you should partition it, write all
+the data (probably by means of loop-back mounting the image or parts of it),
+etc. The end result should be a sparse file where mapped areas represent useful
+parts of the image and holes represent useless parts of the image, which do not
+have to be copied when copying the image to the target device.
+
+This module uses the FIBMAP ioctl to detect holes. """
+
+# Disable the following pylint recommendations:
+# * Too many instance attributes - R0902
+# * Too few public methods - R0903
+# pylint: disable=R0902,R0903
+
+import hashlib
+from mic.utils.misc import human_size
+from mic.utils import Fiemap
+
+# The bmap format version we generate
+SUPPORTED_BMAP_VERSION = "1.3"
+
+_BMAP_START_TEMPLATE = \
+"""<?xml version="1.0" ?>
+<!-- This file contains the block map for an image file, which is basically
+ a list of useful (mapped) block numbers in the image file. In other words,
+ it lists only those blocks which contain data (boot sector, partition
+ table, file-system metadata, files, directories, extents, etc). These
+ blocks have to be copied to the target device. The other blocks do not
+ contain any useful data and do not have to be copied to the target
+ device.
+
+   The block map is an optimization which allows one to copy or flash the
+   image quicker than copying or flashing the entire image. This is
+   because with bmap less data is copied: <MappedBlocksCount> blocks instead
+   of <BlocksCount> blocks.
+
+ Besides the machine-readable data, this file contains useful commentaries
+ which contain human-readable information like image size, percentage of
+ mapped data, etc.
+
+ The 'version' attribute is the block map file format version in the
+ 'major.minor' format. The version major number is increased whenever an
+ incompatible block map format change is made. The minor number changes
+ in case of minor backward-compatible changes. -->
+
+<bmap version="%s">
+ <!-- Image size in bytes: %s -->
+ <ImageSize> %u </ImageSize>
+
+ <!-- Size of a block in bytes -->
+ <BlockSize> %u </BlockSize>
+
+ <!-- Count of blocks in the image file -->
+ <BlocksCount> %u </BlocksCount>
+
+"""
+
+class Error(Exception):
+ """ A class for exceptions generated by this module. We currently support
+ only one type of exceptions, and we basically throw human-readable problem
+ description in case of errors. """
+ pass
+
+class BmapCreate:
+ """ This class implements the bmap creation functionality. To generate a
+ bmap for an image (which is supposedly a sparse file), you should first
+ create an instance of 'BmapCreate' and provide:
+
+ * full path or a file-like object of the image to create bmap for
+ * full path or a file object to use for writing the results to
+
+ Then you should invoke the 'generate()' method of this class. It will use
+ the FIEMAP ioctl to generate the bmap. """
+
+ def _open_image_file(self):
+ """ Open the image file. """
+
+ try:
+ self._f_image = open(self._image_path, 'rb')
+ except IOError as err:
+ raise Error("cannot open image file '%s': %s" \
+ % (self._image_path, err))
+
+ self._f_image_needs_close = True
+
+ def _open_bmap_file(self):
+ """ Open the bmap file. """
+
+ try:
+ self._f_bmap = open(self._bmap_path, 'w+')
+ except IOError as err:
+ raise Error("cannot open bmap file '%s': %s" \
+ % (self._bmap_path, err))
+
+ self._f_bmap_needs_close = True
+
+ def __init__(self, image, bmap):
+ """ Initialize a class instance:
+ * image - full path or a file-like object of the image to create bmap
+ for
+ * bmap - full path or a file object to use for writing the resulting
+ bmap to """
+
+ self.image_size = None
+ self.image_size_human = None
+ self.block_size = None
+ self.blocks_cnt = None
+ self.mapped_cnt = None
+ self.mapped_size = None
+ self.mapped_size_human = None
+ self.mapped_percent = None
+
+ self._mapped_count_pos1 = None
+ self._mapped_count_pos2 = None
+ self._sha1_pos = None
+
+ self._f_image_needs_close = False
+ self._f_bmap_needs_close = False
+
+ if hasattr(image, "read"):
+ self._f_image = image
+ self._image_path = image.name
+ else:
+ self._image_path = image
+ self._open_image_file()
+
+ if hasattr(bmap, "read"):
+ self._f_bmap = bmap
+ self._bmap_path = bmap.name
+ else:
+ self._bmap_path = bmap
+ self._open_bmap_file()
+
+ self.fiemap = Fiemap.Fiemap(self._f_image)
+
+ self.image_size = self.fiemap.image_size
+ self.image_size_human = human_size(self.image_size)
+ if self.image_size == 0:
+ raise Error("cannot generate bmap for zero-sized image file '%s'" \
+ % self._image_path)
+
+ self.block_size = self.fiemap.block_size
+ self.blocks_cnt = self.fiemap.blocks_cnt
+
+ def _bmap_file_start(self):
+ """ A helper function which generates the starting contents of the
+ block map file: the header comment, image size, block size, etc. """
+
+ # We do not know the amount of mapped blocks at the moment, so just put
+ # whitespaces instead of real numbers. Assume the longest possible
+ # numbers.
+ mapped_count = ' ' * len(str(self.image_size))
+ mapped_size_human = ' ' * len(self.image_size_human)
+
+ xml = _BMAP_START_TEMPLATE \
+ % (SUPPORTED_BMAP_VERSION, self.image_size_human,
+ self.image_size, self.block_size, self.blocks_cnt)
+ xml += " <!-- Count of mapped blocks: "
+
+ self._f_bmap.write(xml)
+ self._mapped_count_pos1 = self._f_bmap.tell()
+
+ # Just put white-spaces instead of real information about mapped blocks
+ xml = "%s or %.1f -->\n" % (mapped_size_human, 100.0)
+ xml += " <MappedBlocksCount> "
+
+ self._f_bmap.write(xml)
+ self._mapped_count_pos2 = self._f_bmap.tell()
+
+ xml = "%s </MappedBlocksCount>\n\n" % mapped_count
+
+ # pylint: disable=C0301
+ xml += " <!-- The checksum of this bmap file. When it is calculated, the value of\n"
+ xml += " the SHA1 checksum has be zeoro (40 ASCII \"0\" symbols). -->\n"
+ xml += " <BmapFileSHA1> "
+
+ self._f_bmap.write(xml)
+ self._sha1_pos = self._f_bmap.tell()
+
+ xml = "0" * 40 + " </BmapFileSHA1>\n\n"
+ xml += " <!-- The block map which consists of elements which may either be a\n"
+ xml += " range of blocks or a single block. The 'sha1' attribute (if present)\n"
+ xml += " is the SHA1 checksum of this blocks range. -->\n"
+ xml += " <BlockMap>\n"
+ # pylint: enable=C0301
+
+ self._f_bmap.write(xml)
+
+ def _bmap_file_end(self):
+ """ A helper function which generates the final parts of the block map
+ file: the ending tags and the information about the amount of mapped
+ blocks. """
+
+ xml = " </BlockMap>\n"
+ xml += "</bmap>\n"
+
+ self._f_bmap.write(xml)
+
+ self._f_bmap.seek(self._mapped_count_pos1)
+ self._f_bmap.write("%s or %.1f%%" % \
+ (self.mapped_size_human, self.mapped_percent))
+
+ self._f_bmap.seek(self._mapped_count_pos2)
+ self._f_bmap.write("%u" % self.mapped_cnt)
+
+ self._f_bmap.seek(0)
+ sha1 = hashlib.sha1(self._f_bmap.read()).hexdigest()
+ self._f_bmap.seek(self._sha1_pos)
+ self._f_bmap.write("%s" % sha1)
+
+ def _calculate_sha1(self, first, last):
+ """ A helper function which calculates SHA1 checksum for the range of
+ blocks of the image file: from block 'first' to block 'last'. """
+
+ start = first * self.block_size
+ end = (last + 1) * self.block_size
+
+ self._f_image.seek(start)
+ hash_obj = hashlib.new("sha1")
+
+ chunk_size = 1024*1024
+ to_read = end - start
+ read = 0
+
+ while read < to_read:
+ if read + chunk_size > to_read:
+ chunk_size = to_read - read
+ chunk = self._f_image.read(chunk_size)
+ hash_obj.update(chunk)
+ read += chunk_size
+
+ return hash_obj.hexdigest()
+
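The chunked hashing in `_calculate_sha1()` can be sketched as a standalone helper (hypothetical name `sha1_of_range`, not part of mic); unlike the loop above, this version also stops cleanly if the file ends early:

```python
import hashlib
import io

def sha1_of_range(f, start, end, chunk_size=1024 * 1024):
    """Return the SHA1 hex digest of bytes [start, end) of file object 'f',
    reading in fixed-size chunks so large ranges are never held in memory."""
    f.seek(start)
    hash_obj = hashlib.sha1()
    to_read = end - start
    while to_read > 0:
        chunk = f.read(min(chunk_size, to_read))
        if not chunk:            # short read / EOF: stop instead of looping
            break
        hash_obj.update(chunk)
        to_read -= len(chunk)
    return hash_obj.hexdigest()

# Demo on an in-memory "image": hash bytes 10..19 of a 100-byte file.
demo = io.BytesIO(b"a" * 100)
digest = sha1_of_range(demo, 10, 20)
```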
+ def generate(self, include_checksums = True):
+ """ Generate bmap for the image file. If 'include_checksums' is 'True',
+ also generate SHA1 checksums for block ranges. """
+
+ # Save image file position in order to restore it at the end
+ image_pos = self._f_image.tell()
+
+ self._bmap_file_start()
+
+ # Generate the block map and write it to the XML block map
+ # file as we go.
+ self.mapped_cnt = 0
+ for first, last in self.fiemap.get_mapped_ranges(0, self.blocks_cnt):
+ self.mapped_cnt += last - first + 1
+ if include_checksums:
+ sha1 = self._calculate_sha1(first, last)
+ sha1 = " sha1=\"%s\"" % sha1
+ else:
+ sha1 = ""
+
+ if first != last:
+ self._f_bmap.write(" <Range%s> %s-%s </Range>\n" \
+ % (sha1, first, last))
+ else:
+ self._f_bmap.write(" <Range%s> %s </Range>\n" \
+ % (sha1, first))
+
+ self.mapped_size = self.mapped_cnt * self.block_size
+ self.mapped_size_human = human_size(self.mapped_size)
+ self.mapped_percent = (self.mapped_cnt * 100.0) / self.blocks_cnt
+
+ self._bmap_file_end()
+
+ try:
+ self._f_bmap.flush()
+ except IOError as err:
+ raise Error("cannot flush the bmap file '%s': %s" \
+ % (self._bmap_path, err))
+
+ self._f_image.seek(image_pos)
+
+ def __del__(self):
+ """ The class destructor which closes the opened files. """
+
+ if self._f_image_needs_close:
+ self._f_image.close()
+ if self._f_bmap_needs_close:
+ self._f_bmap.close()
diff --git a/mic/utils/Fiemap.py b/mic/utils/Fiemap.py
new file mode 100644
index 0000000..f2db6ff
--- /dev/null
+++ b/mic/utils/Fiemap.py
@@ -0,0 +1,252 @@
+""" This module implements a python API for the FIEMAP ioctl. The FIEMAP
+ioctl allows finding holes and mapped areas in a file. """
+
+# Note, a lot of code in this module is not very readable, because it deals
+# with the rather complex FIEMAP ioctl. To understand the code, you need to
+# know the FIEMAP interface, which is documented in the
+# Documentation/filesystems/fiemap.txt file in the Linux kernel sources.
+
+# Disable the following pylint recommendations:
+# * Too many instance attributes (R0902)
+# pylint: disable=R0902
+
+import os
+import struct
+import array
+import fcntl
+from mic.utils.misc import get_block_size
+
+# Format string for 'struct fiemap'
+_FIEMAP_FORMAT = "=QQLLLL"
+# sizeof(struct fiemap)
+_FIEMAP_SIZE = struct.calcsize(_FIEMAP_FORMAT)
+# Format string for 'struct fiemap_extent'
+_FIEMAP_EXTENT_FORMAT = "=QQQQQLLLL"
+# sizeof(struct fiemap_extent)
+_FIEMAP_EXTENT_SIZE = struct.calcsize(_FIEMAP_EXTENT_FORMAT)
+# The FIEMAP ioctl number
+_FIEMAP_IOCTL = 0xC020660B
+
+# Minimum buffer which is required for 'class Fiemap' to operate
+MIN_BUFFER_SIZE = _FIEMAP_SIZE + _FIEMAP_EXTENT_SIZE
+# The default buffer size for 'class Fiemap'
+DEFAULT_BUFFER_SIZE = 256 * 1024
+
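As a sanity check on the format strings above: `=` selects native byte order with standard (packed) sizes, so the Python formats match the C struct layouts regardless of compiler padding, and `struct.calcsize()` reproduces the kernel struct sizes:

```python
import struct

# Same format strings as the module defines for the FIEMAP structures.
FIEMAP_FORMAT = "=QQLLLL"            # struct fiemap header: 2x u64 + 4x u32
FIEMAP_EXTENT_FORMAT = "=QQQQQLLLL"  # struct fiemap_extent: 5x u64 + 4x u32

fiemap_size = struct.calcsize(FIEMAP_FORMAT)         # 32 bytes
extent_size = struct.calcsize(FIEMAP_EXTENT_FORMAT)  # 56 bytes
```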
+class Error(Exception):
+ """ A class for exceptions generated by this module. We currently support
+ only one exception type, which simply carries a human-readable problem
+ description. """
+ pass
+
+class Fiemap:
+ """ This class provides an API to the FIEMAP ioctl. Namely, it allows
+ iterating over all mapped blocks and over all holes. """
+
+ def _open_image_file(self):
+ """ Open the image file. """
+
+ try:
+ self._f_image = open(self._image_path, 'rb')
+ except IOError as err:
+ raise Error("cannot open image file '%s': %s" \
+ % (self._image_path, err))
+
+ self._f_image_needs_close = True
+
+ def __init__(self, image, buf_size = DEFAULT_BUFFER_SIZE):
+ """ Initialize a class instance. The 'image' argument is full path to
+ the file to operate on, or a file object to operate on.
+
+ The 'buf_size' argument is the size of the buffer for 'struct
+ fiemap_extent' elements which will be used when invoking the FIEMAP
+ ioctl. The larger the buffer, the fewer times the FIEMAP ioctl will
+ be invoked. """
+
+ self._f_image_needs_close = False
+
+ if hasattr(image, "fileno"):
+ self._f_image = image
+ self._image_path = image.name
+ else:
+ self._image_path = image
+ self._open_image_file()
+
+ # Validate 'buf_size'
+ if buf_size < MIN_BUFFER_SIZE:
+ raise Error("too small buffer (%d bytes), minimum is %d bytes" \
+ % (buf_size, MIN_BUFFER_SIZE))
+
+ # How many 'struct fiemap_extent' elements fit the buffer
+ buf_size -= _FIEMAP_SIZE
+ self._fiemap_extent_cnt = buf_size / _FIEMAP_EXTENT_SIZE
+ self._buf_size = self._fiemap_extent_cnt * _FIEMAP_EXTENT_SIZE
+ self._buf_size += _FIEMAP_SIZE
+
+ # Allocate a mutable buffer for the FIEMAP ioctl
+ self._buf = array.array('B', [0] * self._buf_size)
+
+ self.image_size = os.fstat(self._f_image.fileno()).st_size
+
+ try:
+ self.block_size = get_block_size(self._f_image)
+ except IOError as err:
+ raise Error("cannot get block size for '%s': %s" \
+ % (self._image_path, err))
+
+ self.blocks_cnt = self.image_size + self.block_size - 1
+ self.blocks_cnt /= self.block_size
+
+ # Synchronize the image file to make sure FIEMAP returns correct values
+ try:
+ self._f_image.flush()
+ except IOError as err:
+ raise Error("cannot flush image file '%s': %s" \
+ % (self._image_path, err))
+ try:
+ os.fsync(self._f_image.fileno())
+ except OSError as err:
+ raise Error("cannot synchronize image file '%s': %s " \
+ % (self._image_path, err.strerror))
+
+ # Check if the FIEMAP ioctl is supported
+ self.block_is_mapped(0)
+
+ def __del__(self):
+ """ The class destructor which closes the opened files. """
+
+ if self._f_image_needs_close:
+ self._f_image.close()
+
+ def _invoke_fiemap(self, block, count):
+ """ Invoke the FIEMAP ioctl for 'count' blocks of the file starting from
+ block number 'block'.
+
+ The full result of the operation is stored in 'self._buf' on exit.
+ Returns the unpacked 'struct fiemap' data structure in form of a python
+ list (just like 'struct.unpack()'). """
+
+ if block < 0 or block >= self.blocks_cnt:
+ raise Error("bad block number %d, should be within [0, %d]" \
+ % (block, self.blocks_cnt))
+
+ # Initialize the 'struct fiemap' part of the buffer
+ struct.pack_into(_FIEMAP_FORMAT, self._buf, 0, block * self.block_size,
+ count * self.block_size, 0, 0,
+ self._fiemap_extent_cnt, 0)
+
+ try:
+ fcntl.ioctl(self._f_image, _FIEMAP_IOCTL, self._buf, 1)
+ except IOError as err:
+ error_msg = "the FIEMAP ioctl failed for '%s': %s" \
+ % (self._image_path, err)
+ if err.errno == os.errno.EPERM or err.errno == os.errno.EACCES:
+ # The FIEMAP ioctl was added in kernel version 2.6.28 in 2008
+ error_msg += " (looks like your kernel does not support FIEMAP)"
+
+ raise Error(error_msg)
+
+ return struct.unpack(_FIEMAP_FORMAT, self._buf[:_FIEMAP_SIZE])
+
+ def block_is_mapped(self, block):
+ """ This function returns 'True' if block number 'block' of the image
+ file is mapped and 'False' otherwise. """
+
+ struct_fiemap = self._invoke_fiemap(block, 1)
+
+ # The 3rd element of 'struct_fiemap' is the 'fm_mapped_extents' field.
+ # If it contains zero, the block is not mapped, otherwise it is
+ # mapped.
+ return bool(struct_fiemap[3])
+
+ def block_is_unmapped(self, block):
+ """ This function returns 'True' if block number 'block' of the image
+ file is not mapped (hole) and 'False' otherwise. """
+
+ return not self.block_is_mapped(block)
+
+ def _unpack_fiemap_extent(self, index):
+ """ Unpack a 'struct fiemap_extent' structure object number 'index'
+ from the internal 'self._buf' buffer. """
+
+ offset = _FIEMAP_SIZE + _FIEMAP_EXTENT_SIZE * index
+ return struct.unpack(_FIEMAP_EXTENT_FORMAT,
+ self._buf[offset : offset + _FIEMAP_EXTENT_SIZE])
+
+ def _do_get_mapped_ranges(self, start, count):
+ """ Implements most of the functionality for the 'get_mapped_ranges()'
+ generator: invokes the FIEMAP ioctl, walks through the mapped
+ extents and yields mapped block ranges. However, adjacent ranges may be
+ returned separately (e.g., (1, 100), (101, 200)) and 'get_mapped_ranges()'
+ simply merges them. """
+
+ block = start
+ while block < start + count:
+ struct_fiemap = self._invoke_fiemap(block, count)
+
+ mapped_extents = struct_fiemap[3]
+ if mapped_extents == 0:
+ # No more mapped blocks
+ return
+
+ extent = 0
+ while extent < mapped_extents:
+ fiemap_extent = self._unpack_fiemap_extent(extent)
+
+ # Start of the extent
+ extent_start = fiemap_extent[0]
+ # Starting block number of the extent
+ extent_block = extent_start / self.block_size
+ # Length of the extent
+ extent_len = fiemap_extent[2]
+ # Count of blocks in the extent
+ extent_count = extent_len / self.block_size
+
+ # Extent length and offset have to be block-aligned
+ assert extent_start % self.block_size == 0
+ assert extent_len % self.block_size == 0
+
+ if extent_block > start + count - 1:
+ return
+
+ first = max(extent_block, block)
+ last = min(extent_block + extent_count, start + count) - 1
+ yield (first, last)
+
+ extent += 1
+
+ block = extent_block + extent_count
+
+ def get_mapped_ranges(self, start, count):
+ """ A generator which yields ranges of mapped blocks in the file. The
+ ranges are tuples of 2 elements: [first, last], where 'first' is the
+ first mapped block and 'last' is the last mapped block.
+
+ The ranges are yielded for the area of the file of size 'count' blocks,
+ starting from block 'start'. """
+
+ iterator = self._do_get_mapped_ranges(start, count)
+
+ first_prev, last_prev = iterator.next()
+
+ for first, last in iterator:
+ if last_prev == first - 1:
+ last_prev = last
+ else:
+ yield (first_prev, last_prev)
+ first_prev, last_prev = first, last
+
+ yield (first_prev, last_prev)
+
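The merging performed by `get_mapped_ranges()` can be shown standalone (a Python 3 sketch with a hypothetical name; the module itself targets Python 2):

```python
def merge_adjacent(ranges):
    """Merge (first, last) block ranges that touch, e.g. (0, 5) and (6, 10)
    become (0, 10). Input must be sorted and non-overlapping, which is how
    FIEMAP reports extents."""
    ranges = iter(ranges)
    try:
        first_prev, last_prev = next(ranges)
    except StopIteration:
        return                       # empty input: nothing to yield
    for first, last in ranges:
        if first == last_prev + 1:
            last_prev = last         # extend the current run
        else:
            yield (first_prev, last_prev)
            first_prev, last_prev = first, last
    yield (first_prev, last_prev)

merged = list(merge_adjacent([(0, 5), (6, 10), (20, 30)]))
```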
+ def get_unmapped_ranges(self, start, count):
+ """ Just like 'get_mapped_ranges()', but yields unmapped block ranges
+ instead (holes). """
+
+ hole_first = start
+ for first, last in self._do_get_mapped_ranges(start, count):
+ if first > hole_first:
+ yield (hole_first, first - 1)
+
+ hole_first = last + 1
+
+ if hole_first < start + count:
+ yield (hole_first, start + count - 1)
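The hole inversion done by `get_unmapped_ranges()` works on any sorted, non-overlapping list of `(first, last)` ranges; a self-contained sketch (hypothetical `unmapped_ranges` helper mirroring the logic above):

```python
def unmapped_ranges(mapped, start, count):
    """Invert sorted, non-overlapping (first, last) mapped block ranges
    within [start, start + count) into the hole ranges between them."""
    hole_first = start
    for first, last in mapped:
        if first > hole_first:
            yield (hole_first, first - 1)   # gap before this mapped range
        hole_first = last + 1
    if hole_first < start + count:
        yield (hole_first, start + count - 1)  # trailing hole, if any

# Blocks 2-3 and 6 are mapped in a 10-block file; the rest are holes.
holes = list(unmapped_ranges([(2, 3), (6, 6)], 0, 10))
```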
diff --git a/mic/utils/fs_related.py b/mic/utils/fs_related.py
index b3ca7ec..56b9a4f 100644
--- a/mic/utils/fs_related.py
+++ b/mic/utils/fs_related.py
@@ -107,13 +107,19 @@ def my_fuser(fp):
class BindChrootMount:
"""Represents a bind mount of a directory into a chroot."""
def __init__(self, src, chroot, dest = None, option = None):
- self.src = src
self.root = os.path.abspath(os.path.expanduser(chroot))
self.option = option
+ self.orig_src = self.src = src
+ if os.path.islink(src):
+ self.src = os.readlink(src)
+ if not self.src.startswith('/'):
+ self.src = os.path.abspath(os.path.join(os.path.dirname(src),
+ self.src))
+
if not dest:
- dest = src
- self.dest = self.root + "/" + dest
+ dest = self.src
+ self.dest = os.path.join(self.root, dest.lstrip('/'))
self.mounted = False
self.mountcmd = find_binary_path("mount")
@@ -144,7 +150,12 @@ class BindChrootMount:
rc = runner.show([self.mountcmd, "--bind", "-o", "remount,%s" % self.option, self.dest])
if rc != 0:
raise MountError("Bind-remounting '%s' failed" % self.dest)
+
self.mounted = True
+ if os.path.islink(self.orig_src):
+ dest = os.path.join(self.root, self.orig_src.lstrip('/'))
+ if not os.path.exists(dest):
+ os.symlink(self.src, dest)
def unmount(self):
if self.has_chroot_instance():
@@ -863,6 +874,9 @@ class LoopDevice(object):
def _genloopid(self):
import glob
+ if not glob.glob("/dev/loop[0-9]*"):
+ return 10
+
fint = lambda x: x[9:].isdigit() and int(x[9:]) or 0
maxid = 1 + max(filter(lambda x: x<100,
map(fint, glob.glob("/dev/loop[0-9]*"))))
@@ -940,10 +954,15 @@ class LoopDevice(object):
os.unlink(self.device)
DEVICE_PIDFILE_DIR = "/var/tmp/mic/device"
+DEVICE_LOCKFILE = "/var/lock/__mic_loopdev.lock"
def get_loop_device(losetupcmd, lofile):
+ global DEVICE_PIDFILE_DIR
+ global DEVICE_LOCKFILE
+
import fcntl
- fp = open("/var/lock/__mic_loopdev.lock", 'w')
+ makedirs(os.path.dirname(DEVICE_LOCKFILE))
+ fp = open(DEVICE_LOCKFILE, 'w')
fcntl.flock(fp, fcntl.LOCK_EX)
try:
loopdev = None
@@ -984,7 +1003,7 @@ def get_loop_device(losetupcmd, lofile):
try:
fcntl.flock(fp, fcntl.LOCK_UN)
fp.close()
- os.unlink('/var/lock/__mic_loopdev.lock')
+ os.unlink(DEVICE_LOCKFILE)
except:
pass
diff --git a/mic/utils/gpt_parser.py b/mic/utils/gpt_parser.py
index cbf1097..5d43b70 100644
--- a/mic/utils/gpt_parser.py
+++ b/mic/utils/gpt_parser.py
@@ -20,10 +20,14 @@ GPT header and the GPT partition table. """
import struct
import uuid
+import binascii
from mic.utils.errors import MountError
-GPT_HEADER_FORMAT = "<8sIIIIQQQQ16sQIII420x"
-GPT_ENTRY_FORMAT = "<16s16sQQQ72s"
+_GPT_HEADER_FORMAT = "<8s4sIIIQQQQ16sQIII"
+_GPT_HEADER_SIZE = struct.calcsize(_GPT_HEADER_FORMAT)
+_GPT_ENTRY_FORMAT = "<16s16sQQQ72s"
+_GPT_ENTRY_SIZE = struct.calcsize(_GPT_ENTRY_FORMAT)
+_SUPPORTED_GPT_REVISION = '\x00\x00\x01\x00'
def _stringify_uuid(binary_uuid):
""" A small helper function to transform a binary UUID into a string
@@ -33,14 +37,50 @@ def _stringify_uuid(binary_uuid):
return uuid_str.upper()
+def _calc_header_crc(raw_hdr):
+ """ Calculate GPT header CRC32 checksum. The 'raw_hdr' parameter has to
+ be a list or a tuple containing all the elements of the GPT header in a
+ "raw" form, meaning that it should simply contain "unpacked" disk data.
+ """
+
+ raw_hdr = list(raw_hdr)
+ raw_hdr[3] = 0
+ raw_hdr = struct.pack(_GPT_HEADER_FORMAT, *raw_hdr)
+
+ return binascii.crc32(raw_hdr) & 0xFFFFFFFF
+
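The `& 0xFFFFFFFF` mask above matters because `binascii.crc32()` can return a signed value (it always does for large results on Python 2); masking yields the unsigned 32-bit number that GPT stores on disk. The standard CRC-32 check value illustrates this:

```python
import binascii

# "123456789" is the conventional CRC check input; for CRC-32 (the
# polynomial binascii uses) the expected result is 0xCBF43926.
crc = binascii.crc32(b"123456789") & 0xFFFFFFFF
```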
+def _validate_header(raw_hdr):
+ """ Validate the GPT header. The 'raw_hdr' parameter has to be a list or a
+ tuple containing all the elements of the GPT header in a "raw" form,
+ meaning that it should simply contain "unpacked" disk data. """
+
+ # Validate the signature
+ if raw_hdr[0] != 'EFI PART':
+ raise MountError("GPT partition table not found")
+
+ # Validate the revision
+ if raw_hdr[1] != _SUPPORTED_GPT_REVISION:
+ raise MountError("Unsupported GPT revision '%s', supported revision " \
+ "is '%s'" % \
+ (binascii.hexlify(raw_hdr[1]),
+ binascii.hexlify(_SUPPORTED_GPT_REVISION)))
+
+ # Validate header size
+ if raw_hdr[2] != _GPT_HEADER_SIZE:
+ raise MountError("Bad GPT header size: %d bytes, expected %d" % \
+ (raw_hdr[2], _GPT_HEADER_SIZE))
+
+ crc = _calc_header_crc(raw_hdr)
+ if raw_hdr[3] != crc:
+ raise MountError("GPT header crc mismatch: %#x, should be %#x" % \
+ (crc, raw_hdr[3]))
+
class GptParser:
- """ GPT partition table parser. The current implementation is simplified
- and it assumes that the partition table is correct, so it does not check
- the CRC-32 checksums and does not handle the backup GPT partition table.
- But this implementation can be extended in the future, if needed. """
+ """ GPT partition table parser. Allows reading the GPT header and the
+ partition table, as well as modifying the partition table records. """
def __init__(self, disk_path, sector_size = 512):
- """ The class construcor which accepts the following parameters:
+ """ The class constructor which accepts the following parameters:
* disk_path - full path to the disk image or device node
* sector_size - size of a disk sector in bytes """
@@ -48,7 +88,7 @@ class GptParser:
self.disk_path = disk_path
try:
- self.disk_obj = open(disk_path, 'rb')
+ self._disk_obj = open(disk_path, 'r+b')
except IOError as err:
raise MountError("Cannot open file '%s' for reading GPT " \
"partitions: %s" % (disk_path, err))
@@ -56,77 +96,236 @@ class GptParser:
def __del__(self):
""" The class destructor. """
- self.disk_obj.close()
+ self._disk_obj.close()
+
+ def _read_disk(self, offset, size):
+ """ A helper function which reads 'size' bytes from offset 'offset' of
+ the disk and checks all the error conditions. """
- def read_header(self):
- """ Read and verify the GPT header and return a tuple containing the
- following elements:
+ self._disk_obj.seek(offset)
+ try:
+ data = self._disk_obj.read(size)
+ except IOError as err:
+ raise MountError("cannot read from '%s': %s" % \
+ (self.disk_path, err))
+
+ if len(data) != size:
+ raise MountError("cannot read %d bytes from offset '%d' of '%s', " \
+ "read only %d bytes" % \
+ (size, offset, self.disk_path, len(data)))
+
+ return data
- (Signature, Revision, Header size in bytes, header CRC32, Current LBA,
- Backup LBA, First usable LBA for partitions, Last usable LBA, Disk GUID,
- Starting LBA of array of partition entries, Number of partition entries,
- Size of a single partition entry, CRC32 of partition array)
+ def _write_disk(self, offset, buf):
+ """ A helper function which writes buffer 'buf' to offset 'offset' of
+ the disk. This function takes care of unaligned writes and checks all
+ the error conditions. """
- This tuple corresponds to the GPT header format. Please, see the UEFI
- standard for the description of these fields. """
+ # Since we may be dealing with a block device, we can only write in
+ # 'self.sector_size' chunks. Find the aligned starting and ending
+ # disk offsets to read.
+ start = (offset / self.sector_size) * self.sector_size
+ end = ((start + len(buf)) / self.sector_size + 1) * self.sector_size
- # The header sits at LBA 1 - read it
- self.disk_obj.seek(self.sector_size)
+ data = self._read_disk(start, end - start)
+ off = offset - start
+ data = data[:off] + buf + data[off + len(buf):]
+
+ self._disk_obj.seek(start)
try:
- header = self.disk_obj.read(struct.calcsize(GPT_HEADER_FORMAT))
+ self._disk_obj.write(data)
except IOError as err:
- raise MountError("cannot read from file '%s': %s" % \
- (self.disk_path, err))
+ raise MountError("cannot write to '%s': %s" % (self.disk_path, err))
+
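The read-modify-write scheme of `_write_disk()` can be modeled on a plain bytearray (hypothetical `aligned_write` helper; note the real method computes a slightly over-sized `end`, while this sketch uses an exact ceiling):

```python
def aligned_write(disk, offset, buf, sector_size=512):
    """Patch 'buf' into bytearray 'disk' at byte 'offset' using only
    whole-sector reads and writes, as block devices require."""
    start = (offset // sector_size) * sector_size
    end = ((offset + len(buf) + sector_size - 1)
           // sector_size) * sector_size
    sectors = bytearray(disk[start:end])   # read the covering sectors
    off = offset - start
    sectors[off:off + len(buf)] = buf      # splice in the payload
    disk[start:end] = sectors              # write the sectors back
    return disk

disk = bytearray(2048)          # pretend disk: four 512-byte sectors
aligned_write(disk, 600, b"XY") # unaligned 2-byte write into sector 1
```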
+ def read_header(self, primary = True):
+ """ Read and verify the GPT header and return a dictionary containing
+ the following elements:
+
+ 'signature' : header signature
+ 'revision' : header revision
+ 'hdr_size' : header size in bytes
+ 'hdr_crc' : header CRC32
+ 'hdr_lba' : LBA of this header
+ 'hdr_offs' : byte disk offset of this header
+ 'backup_lba' : backup header LBA
+ 'backup_offs' : byte disk offset of backup header
+ 'first_lba' : first usable LBA for partitions
+ 'first_offs' : first usable byte disk offset for partitions
+ 'last_lba' : last usable LBA for partitions
+ 'last_offs' : last usable byte disk offset for partitions
+ 'disk_uuid' : UUID of the disk
+ 'ptable_lba' : starting LBA of array of partition entries
+ 'ptable_offs' : disk byte offset of the start of the partition table
+ 'ptable_size' : partition table size in bytes
+ 'entries_cnt' : number of available partition table entries
+ 'entry_size' : size of a single partition entry
+ 'ptable_crc' : CRC32 of the partition table
+ 'primary' : a boolean, if 'True', this is the primary GPT header,
+ if 'False' - the secondary
+ 'primary_str' : contains string "primary" if this is the primary GPT
+ header, and "backup" otherwise
- header = struct.unpack(GPT_HEADER_FORMAT, header)
+ This dictionary corresponds to the GPT header format. Please, see the
+ UEFI standard for the description of these fields.
- # Perform a simple validation
- if header[0] != 'EFI PART':
- raise MountError("GPT paritition table on disk '%s' not found" % \
- self.disk_path)
+ If the 'primary' parameter is 'True', the primary GPT header is read,
+ otherwise the backup GPT header is read instead. """
- return (header[0], # 0. Signature
- header[1], # 1. Revision
- header[2], # 2. Header size in bytes
- header[3], # 3. Header CRC32
- header[5], # 4. Current LBA
- header[6], # 5. Backup LBA
- header[7], # 6. First usable LBA for partitions
- header[8], # 7. Last usable LBA
- _stringify_uuid(header[9]), # 8. Disk GUID
- header[10], # 9. Starting LBA of array of partition entries
- header[11], # 10. Number of partition entries
- header[12], # 11. Size of a single partition entry
- header[13]) # 12. CRC32 of partition array
+ # Read and validate the primary GPT header
+ raw_hdr = self._read_disk(self.sector_size, _GPT_HEADER_SIZE)
+ raw_hdr = struct.unpack(_GPT_HEADER_FORMAT, raw_hdr)
+ _validate_header(raw_hdr)
+ primary_str = "primary"
- def get_partitions(self):
- """ This is a generator which parses teh GPT partition table and
- generates the following tupes for each partition:
+ if not primary:
+ # Read and validate the backup GPT header
+ raw_hdr = self._read_disk(raw_hdr[6] * self.sector_size, _GPT_HEADER_SIZE)
+ raw_hdr = struct.unpack(_GPT_HEADER_FORMAT, raw_hdr)
+ _validate_header(raw_hdr)
+ primary_str = "backup"
- (Partition type GUID, Partition GUID, First LBA, Last LBA,
- Attribute flags, Partition name)
+ return { 'signature' : raw_hdr[0],
+ 'revision' : raw_hdr[1],
+ 'hdr_size' : raw_hdr[2],
+ 'hdr_crc' : raw_hdr[3],
+ 'hdr_lba' : raw_hdr[5],
+ 'hdr_offs' : raw_hdr[5] * self.sector_size,
+ 'backup_lba' : raw_hdr[6],
+ 'backup_offs' : raw_hdr[6] * self.sector_size,
+ 'first_lba' : raw_hdr[7],
+ 'first_offs' : raw_hdr[7] * self.sector_size,
+ 'last_lba' : raw_hdr[8],
+ 'last_offs' : raw_hdr[8] * self.sector_size,
+ 'disk_uuid' :_stringify_uuid(raw_hdr[9]),
+ 'ptable_lba' : raw_hdr[10],
+ 'ptable_offs' : raw_hdr[10] * self.sector_size,
+ 'ptable_size' : raw_hdr[11] * raw_hdr[12],
+ 'entries_cnt' : raw_hdr[11],
+ 'entry_size' : raw_hdr[12],
+ 'ptable_crc' : raw_hdr[13],
+ 'primary' : primary,
+ 'primary_str' : primary_str }
- This tuple corresponds to the GPT partition record format. Please, see the
- UEFI standard for the description of these fields. """
+ def _read_raw_ptable(self, header):
+ """ Read and validate primary or backup partition table. The 'header'
+ argument is the GPT header. If it is the primary GPT header, then the
+ primary partition table is read and validated, otherwise - the backup
+ one. The 'header' argument is a dictionary which is returned by the
+ 'read_header()' method. """
- gpt_header = self.read_header()
- entries_start = gpt_header[9] * self.sector_size
- entries_count = gpt_header[10]
+ raw_ptable = self._read_disk(header['ptable_offs'],
+ header['ptable_size'])
- self.disk_obj.seek(entries_start)
+ crc = binascii.crc32(raw_ptable) & 0xFFFFFFFF
+ if crc != header['ptable_crc']:
+ raise MountError("Partition table at LBA %d (%s) is corrupted" % \
+ (header['ptable_lba'], header['primary_str']))
- for _ in xrange(0, entries_count):
- entry = self.disk_obj.read(struct.calcsize(GPT_ENTRY_FORMAT))
- entry = struct.unpack(GPT_ENTRY_FORMAT, entry)
+ return raw_ptable
- if entry[2] == 0 or entry[3] == 0:
+ def get_partitions(self, primary = True):
+ """ This is a generator which parses the GPT partition table and
+ generates the following dictionary for each partition:
+
+ 'index' : the index of the partition table entry
+ 'offs' : byte disk offset of the partition table entry
+ 'type_uuid' : partition type UUID
+ 'part_uuid' : partition UUID
+ 'first_lba' : the first LBA
+ 'last_lba' : the last LBA
+ 'flags' : attribute flags
+ 'name' : partition name
+ 'primary' : a boolean, if 'True', this is the primary partition
+ table, if 'False' - the secondary
+ 'primary_str' : contains string "primary" if this is the primary GPT
+ header, and "backup" otherwise
+
+ This dictionary corresponds to the GPT partition record format. Please, see
+ UEFI standard for the description of these fields.
+
+ If the 'primary' parameter is 'True', partitions from the primary GPT
+ partition table are generated, otherwise partitions from the backup GPT
+ partition table are generated. """
+
+ if primary:
+ primary_str = "primary"
+ else:
+ primary_str = "backup"
+
+ header = self.read_header(primary)
+ raw_ptable = self._read_raw_ptable(header)
+
+ for index in xrange(0, header['entries_cnt']):
+ start = header['entry_size'] * index
+ end = start + header['entry_size']
+ raw_entry = struct.unpack(_GPT_ENTRY_FORMAT, raw_ptable[start:end])
+
+ if raw_entry[2] == 0 or raw_entry[3] == 0:
continue
- part_name = str(entry[5].decode('UTF-16').split('\0', 1)[0])
+ part_name = str(raw_entry[5].decode('UTF-16').split('\0', 1)[0])
+
+ yield { 'index' : index,
+ 'offs' : header['ptable_offs'] + start,
+ 'type_uuid' : _stringify_uuid(raw_entry[0]),
+ 'part_uuid' : _stringify_uuid(raw_entry[1]),
+ 'first_lba' : raw_entry[2],
+ 'last_lba' : raw_entry[3],
+ 'flags' : raw_entry[4],
+ 'name' : part_name,
+ 'primary' : primary,
+ 'primary_str' : primary_str }
+
+ def _change_partition(self, header, entry):
+ """ A helper function for 'change_partition()' which changes a
+ particular instance of the partition table (primary or backup). """
+
+ if entry['index'] >= header['entries_cnt']:
+ raise MountError("Partition table has only %d records, " \
+ "cannot change record number %d" % \
+ (header['entries_cnt'], entry['index']))
+ # Read raw GPT header
+ raw_hdr = self._read_disk(header['hdr_offs'], _GPT_HEADER_SIZE)
+ raw_hdr = list(struct.unpack(_GPT_HEADER_FORMAT, raw_hdr))
+ _validate_header(raw_hdr)
+
+ # Prepare the new partition table entry
+ raw_entry = struct.pack(_GPT_ENTRY_FORMAT,
+ uuid.UUID(entry['type_uuid']).bytes_le,
+ uuid.UUID(entry['part_uuid']).bytes_le,
+ entry['first_lba'],
+ entry['last_lba'],
+ entry['flags'],
+ entry['name'].encode('UTF-16'))
+
+ # Write the updated entry to the disk
+ entry_offs = header['ptable_offs'] + \
+ header['entry_size'] * entry['index']
+ self._write_disk(entry_offs, raw_entry)
+
+ # Calculate and update partition table CRC32
+ raw_ptable = self._read_disk(header['ptable_offs'],
+ header['ptable_size'])
+ raw_hdr[13] = binascii.crc32(raw_ptable) & 0xFFFFFFFF
+
+ # Calculate and update the GPT header CRC
+ raw_hdr[3] = _calc_header_crc(raw_hdr)
+
+ # Write the updated header to the disk
+ raw_hdr = struct.pack(_GPT_HEADER_FORMAT, *raw_hdr)
+ self._write_disk(header['hdr_offs'], raw_hdr)
+
+ def change_partition(self, entry):
+ """ Change a GPT partition. The 'entry' argument has the same format as
+ 'get_partitions()' returns. This function simply changes the partition
+ table record corresponding to 'entry' in both the primary and the
+ backup GPT partition tables. The partition table CRC is re-calculated
+ and the GPT headers are modified accordingly. """
+
+ # Change the primary partition table
+ header = self.read_header(True)
+ self._change_partition(header, entry)
- yield (_stringify_uuid(entry[0]), # 0. Partition type GUID
- _stringify_uuid(entry[1]), # 1. Partition GUID
- entry[2], # 2. First LBA
- entry[3], # 3. Last LBA
- entry[4], # 4. Attribute flags
- part_name) # 5. Partition name
+ # Change the backup partition table
+ header = self.read_header(False)
+ self._change_partition(header, entry)
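The packing with `uuid.UUID(...).bytes_le` in `_change_partition()` works because GPT stores GUIDs in mixed-endian form: the first three fields are little-endian on disk. A quick round-trip check, using the well-known EFI System Partition type GUID:

```python
import uuid

guid = "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"  # EFI System Partition type
raw = uuid.UUID(guid).bytes_le                  # the 16 bytes as stored on disk

# The first field (0xC12A7328) comes out byte-swapped on disk...
first_field = raw[:4]
# ...and bytes_le round-trips back to the same UUID.
roundtrip = uuid.UUID(bytes_le=raw)
```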
diff --git a/mic/utils/misc.py b/mic/utils/misc.py
index 83ab8d6..e7dbf2f 100644
--- a/mic/utils/misc.py
+++ b/mic/utils/misc.py
@@ -267,6 +267,18 @@ def human_size(size):
mant = float(size/math.pow(1024, expo))
return "{0:.1f}{1:s}".format(mant, measure[expo])
+def get_block_size(file_obj):
+ """ Returns block size for file object 'file_obj'. Errors are indicated by
+ the 'IOError' exception. """
+
+ from fcntl import ioctl
+ import struct
+
+ # Get the block size of the host file-system for the image file by calling
+ # the FIGETBSZ ioctl (number 2).
+ binary_data = ioctl(file_obj, 2, struct.pack('I', 0))
+ return struct.unpack('I', binary_data)[0]
+
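A minimal usage sketch of the FIGETBSZ ioctl on a temporary file (Linux-only; this is the same ioctl number 2 that `get_block_size()` passes):

```python
import fcntl
import struct
import tempfile

FIGETBSZ = 2  # _IO(0x00, 2): ask the VFS for the filesystem block size

with tempfile.NamedTemporaryFile() as f:
    raw = fcntl.ioctl(f.fileno(), FIGETBSZ, struct.pack('I', 0))
    block_size = struct.unpack('I', raw)[0]
# block_size is the host filesystem's block size, e.g. 4096 on ext4
```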
def check_space_pre_cp(src, dst):
"""Check whether disk space is enough before 'cp' like
operations, else exception will be raised.
diff --git a/mic/utils/partitionedfs.py b/mic/utils/partitionedfs.py
index a590e89..f1102d4 100644
--- a/mic/utils/partitionedfs.py
+++ b/mic/utils/partitionedfs.py
@@ -94,7 +94,8 @@ class PartitionedMount(Mount):
self.__add_disk(part['disk_name'])
def add_partition(self, size, disk_name, mountpoint, fstype = None,
- label=None, fsopts = None, boot = False, align = None):
+ label=None, fsopts = None, boot = False, align = None,
+ part_type = None):
""" Add the next partition. Partitions have to be added in the
first-to-last order. """
@@ -146,6 +147,7 @@ class PartitionedMount(Mount):
'num': None, # Partition number
'boot': boot, # Bootable flag
'align': align, # Partition alignment
+ 'part_type' : part_type, # Partition type
'partuuid': None } # Partition UUID (GPT-only)
self.__add_partition(part)
@@ -174,6 +176,13 @@ class PartitionedMount(Mount):
raise MountError("No disk %s for partition %s" \
% (p['disk_name'], p['mountpoint']))
+ if p['part_type'] and ptable_format != 'gpt':
+ # The --part-type can also be implemented for MBR partitions,
+ # in which case it would map to the 1-byte "partition type"
+ # field at offset 4 of the partition entry.
+ raise MountError("setting custom partition type is only " \
+ "implemented for GPT partitions")
+
# Get the disk where the partition is located
d = self.disks[p['disk_name']]
d['numpart'] += 1
@@ -339,7 +348,8 @@ class PartitionedMount(Mount):
"%d" % p['num'], flag_name, "on"])
# If the partition table format is "gpt", find out PARTUUIDs for all
- # the partitions
+ # the partitions. And if users specified custom partition type UUIDs,
+ # set them.
for disk_name, disk in self.disks.items():
if disk['ptable_format'] != 'gpt':
continue
@@ -353,11 +363,18 @@ class PartitionedMount(Mount):
for n in d['partitions']:
p = self.partitions[n]
if p['num'] == pnum:
- # Found, assign PARTUUID
- p['partuuid'] = entry[1]
- msger.debug("PARTUUID for partition %d of disk '%s' " \
+ # Found, fetch PARTUUID (partition's unique ID)
+ p['partuuid'] = entry['part_uuid']
+ msger.debug("PARTUUID for partition %d on disk '%s' " \
"(mount point '%s') is '%s'" % (pnum, \
disk_name, p['mountpoint'], p['partuuid']))
+ if p['part_type']:
+ entry['type_uuid'] = p['part_type']
+ msger.debug("Change type of partition %d on disk " \
+ "'%s' (mount point '%s') to '%s'" % \
+ (pnum, disk_name, p['mountpoint'],
+ p['part_type']))
+ gpt_parser.change_partition(entry)
del gpt_parser
@@ -431,6 +448,10 @@ class PartitionedMount(Mount):
raise MountError("Failed to map partitions for '%s'" %
d['disk'].device)
+ # FIXME: there is a short delay before the multipath device is
+ # set up; wait 10 seconds for the setup to complete
+ import time
+ time.sleep(10)
d['mapped'] = True
def __unmap_partitions(self):
diff --git a/packaging/mic.changes b/packaging/mic.changes
index fe0b2d9..06768ac 100644
--- a/packaging/mic.changes
+++ b/packaging/mic.changes
@@ -1,3 +1,13 @@
+* Thu May 16 2013 Gui Chen <gui.chen@intel.com> - 0.19
+ - new distribution support: Ubuntu 13.04 and openSUSE 12.3
+ - introduce '--part-type' to handle GPT partition
+ - copy bmap creation from bmap-tools
+ - update some depends and fix depends issue
+ - bug fix:
+ - fix bug autologinuser always set
+ - fix symlink bind mount left issue
+ - fix '/var/lock' non-existent throw traceback
+
* Mon Apr 03 2013 Gui Chen <gui.chen@intel.com> - 0.18
- put build_id before image name for release option
- mount build directory as tmpfs to speed up
diff --git a/packaging/mic.dsc b/packaging/mic.dsc
index 29eeaa9..bf39ca0 100644
--- a/packaging/mic.dsc
+++ b/packaging/mic.dsc
@@ -2,7 +2,7 @@ Format: 1.0
Source: mic
Binary: mic
Architecture: all
-Version: 0.18.1
+Version: 0.19
Maintainer: Jian-feng Ding <jian-feng.ding@intel.com>
Homepage: http://www.tizen.org
Standards-Version: 3.8.0
diff --git a/packaging/mic.spec b/packaging/mic.spec
index d350c0a..926c44c 100644
--- a/packaging/mic.spec
+++ b/packaging/mic.spec
@@ -2,7 +2,7 @@
Name: mic
Summary: Image Creator for Linux Distributions
-Version: 0.18.1
+Version: 0.19
Release: 1
Group: System/Base
License: GPLv2
@@ -36,8 +36,10 @@ Requires: btrfs-progs
%if 0%{?suse_version}
Requires: squashfs >= 4.0
+Requires: python-m2crypto
%else
Requires: squashfs-tools >= 4.0
+Requires: m2crypto
%endif
%if 0%{?fedora_version} || 0%{?centos_version}
@@ -50,7 +52,7 @@ Requires: qemu-linux-user
Requires: qemu-arm-static
%endif
-Requires: python-zypp
+Requires: tizen-python-zypp
BuildRequires: python-devel
%if ! 0%{?tizen_version:1}