Image Storage and Indexing for Machine Vision Images
Every Software Engineer needs a hobby – to this end I have been toying with an idea for the last while.
There are many machine vision and computer vision applications that capture images from cameras and store them on disk. These applications can generate so many images that working with them becomes quite difficult. For example, consider an application that acquires images from two cameras, each capturing at 30 frames per second – it will save 216,000 images per hour, so a 5 hour run would generate over 1 million images!
Very often the images are stored on a file system (local or networked) in some sort of hierarchical directory structure. Using a file system is a very efficient way of storing images; database systems (relational or NoSQL) don't offer many advantages here and indeed can bring associated disadvantages.
But how can we work effectively with so many images? With possibly millions of images sitting in a set of directories, how can we interact with them and efficiently query them based on the attributes that interest us, so that we can perform further analysis?
For example, consider this set of (contrived) image queries:
Give me all of the images:
+ from camera 1
+ from camera 1 acquired on Sunday between 13:00 and 13:10
+ whose file size > 1MB
+ acquired within 100 meters of this GPS location
+ that have an average brightness > 63 Grey levels
Some people have attacked this image query problem by using a relational database to store image meta-data. If designed well, this can allow for efficient image retrieval; however, it seems to me that a schema-less approach is a better fit for images with dynamic attributes, and I like the idea of not being tied down to any particular database technology and all of the baggage that comes with it.
So my idea is to start out on the road of implementing (for fun) a simple image indexing system for rather large sets of images. It will have an associated tool set, an API and maybe even a query language in the future.
The system will:
Allow indexing of large numbers of images in arbitrary hierarchical directory structures
Index images based on standard attributes such as:
+ Acquisition Date/Time
+ Name
+ Source (e.g. camera)
+ Type
+ Size
+ Bit Depth
+ Exif data (a short sketch of reading these follows this list), e.g.:
–> Location
–> Author
–> Acquisition parameters (aperture, exposure time etc.)
+ Etc.
Optionally, index images based on computer vision metrics (see the sketch at the end of this post), e.g.:
+ Brightness
+ Sharpness
+ Etc.
Allow users to define their own attributes for indexing, e.g.:
+ Define image attributes based on an OpenCV algorithm
+ Define attributes based on the contents of the image file name (also sketched at the end of this post)
In addition, the system will:
+ Have no dependencies on technologies such as Database systems etc.
+ Be cross platform
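To give a flavour of the Exif indexing mentioned above, here is a minimal sketch of how Exif attributes (e.g. acquisition date, camera make and model) might be read from a JPEG. It assumes the Pillow library is installed, and the file path used is purely an illustrative placeholder:

#!/usr/bin/python
# Sketch only: read Exif attributes from a JPEG using Pillow.
# Assumes Pillow is installed (pip install Pillow); which tags are
# present varies from camera to camera.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_attributes(image_path):
    attributes = {}
    with Image.open(image_path) as img:
        for tag_id, value in img.getexif().items():
            # Map the numeric Exif tag id to its human-readable name
            attributes[TAGS.get(tag_id, tag_id)] = value
    return attributes

# Illustrative path only
print(exif_attributes('images/Run1/ccm17/example.jpg'))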
To get the ball rolling, and so that we can say the first sod has been turned, here is some (naive) Python which scans a directory tree of images and creates a flat CSV file of each image's name, path and size:
#!/usr/bin/python
import argparse
import fnmatch
import os
import time

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--path", help="The root path to the images directory tree")
args = parser.parse_args()
path = args.path

print('looking in ' + path)

ii = 0
start = time.time()

# Walk the directory tree and write one CSV row per .jpg image found
with open(os.path.join(path, '.flat'), 'w') as out:
    for root, _, filenames in os.walk(path):
        for name in fnmatch.filter(filenames, '*.jpg'):
            p = os.path.relpath(root, path)
            size = os.stat(os.path.join(root, name)).st_size
            out.write("i,%s,%s,%d\n" % (name, p, size))
            ii += 1
            if ii % 1000 == 0:
                print("Reading %d" % ii)

duration = time.time() - start
print('%d images indexed in %d seconds, %d images/s' % (ii, duration, ii / duration))
Run it like this:
scanner.py --path "images\Run1\ccm17"
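Each row of the generated .flat file holds a record-type marker ('i' for image), the file name, the directory path relative to the root and the file size in bytes. A couple of illustrative (made-up) rows:

i,img_000120_43.jpg,cam1,77812
i,img_000121_44.jpg,cam1,74230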
Once the directory tree has been walked and the CSV file generated we can use the following script to query images:
#!/usr/bin/python
import argparse
import csv
import os

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--path", help="The root path to the images directory tree")
parser.add_argument("-w", "--where", help="The where expression to evaluate against each image")
args = parser.parse_args()
path = args.path

print('looking in ' + path)

# Compile the where expression once, then evaluate it per image
code = compile(args.where, '<where>', 'eval')

class Image:
    def __init__(self, name, size, path):
        self.name = name
        self.size = size
        self.path = path

images = []
index = os.path.join(path, '.flat')
print('opening index ' + index)

with open(index) as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='|')
    for row in reader:
        # Row layout: record type, name, path, size
        images.append(Image(row[1], int(row[3]), row[2]))

print('index loaded')

# Print every image for which the where expression evaluates to True
for image in images:
    if eval(code):
        print('%s %s' % (image.name, image.size))
This allows us to run queries like this:
select.py --path "images\Run1\ccm17" --where "'_43' in image.name and image.size > 76000"
This will quickly list the images whose file size is greater than 76000 bytes and whose name contains ‘_43’.
This is a really simple first step, but it does demonstrate how even a flat ‘index’ of attributes can be of great use.
Next Step:
+ Add more image attributes to the CSV file
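As a taste of that next step, here is a sketch of two candidate attributes that could be appended as extra columns: average brightness computed from the grey-level image (using OpenCV, assuming its Python bindings are installed) and a camera id parsed from the file name (the _cam<N>.jpg naming convention here is purely hypothetical):

#!/usr/bin/python
# Sketch only: two candidate attributes for the CSV index.
# Assumes OpenCV's Python bindings (cv2) are installed; the file
# name convention below is hypothetical.
import re
import cv2

def average_brightness(image_path):
    # Read the image as a single-channel grey-level image and
    # return its mean grey level (0-255)
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        return None  # unreadable or non-image file
    return grey.mean()

def camera_id(name):
    # Hypothetical convention: file names end in _cam<N>.jpg
    match = re.search(r'_cam(\d+)\.jpg$', name)
    return int(match.group(1)) if match else None

print(average_brightness('images/Run1/ccm17/img_000120_cam1.jpg'))
print(camera_id('img_000120_cam1.jpg'))

Once attributes like these are columns in the index, they could be queried with the same --where mechanism, e.g. --where "image.brightness > 63" (once the Image class grows a brightness field).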