Image Storage and Indexing for Machine Vision Images

Every Software Engineer needs a hobby – to this end I have been toying with an idea for the last while.

There are many machine vision and computer vision applications that capture images from cameras and store them on disk. These applications can generate so many images that working with them becomes quite difficult. For example, consider an application that acquires from two cameras, each capturing at 30 frames per second – it will save 216,000 images per hour, and a 5 hour run would generate over 1 million images!
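The arithmetic is easy to sanity-check with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: how many images does a two-camera, 30 fps system save?
cameras = 2
fps = 30
seconds_per_hour = 3600

images_per_hour = cameras * fps * seconds_per_hour
images_per_run = images_per_hour * 5  # a 5 hour run

print(images_per_hour)  # 216000 images per hour
print(images_per_run)   # 1080000 images - over a million
```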

Very often the images are stored on a file system (local or networked) in some sort of hierarchical directory structure. A file system is a very efficient way of storing images; database systems (relational or NoSQL) don’t offer many advantages here and indeed can have associated disadvantages.

But how can we work effectively with so many images? With possibly millions of images sitting in a set of directories, how can we interact with them and efficiently query them based on the attributes that interest us, so that we can perform further analysis?

For example consider this set of (contrived) image queries:

Give me all of the images:

+ from camera 1
+ from camera 1 acquired on Sunday between 13:00 and 13:10
+ whose file size > 1MB
+ acquired within 100 meters of this GPS location
+ that have an average brightness > 63 Grey levels

Some people have attacked this image query problem by using a relational database to store image meta-data. If designed well, this can allow for efficient image retrieval. However, it seems to me that a schema-less approach is a better fit for images with dynamic attributes, and I like the idea of not being tied down to any particular database technology and all of the baggage that comes with it.
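To illustrate what I mean by schema-less: each image’s meta-data could be a simple JSON record carrying only the attributes that apply to it, with no fixed schema at all (the attribute names here are just examples):

```python
import json

# Two image records with different attribute sets - no fixed schema required.
records = [
    {"name": "cam1_000042.png", "size": 1048576, "camera": 1,
     "gps": {"lat": 53.35, "lon": -6.26}},
    {"name": "scan_0007.tif", "size": 204800, "bit_depth": 16,
     "brightness": 71.5},  # no camera or GPS attributes at all
]

# Serialise one record per line ("JSON lines") - trivially appendable.
lines = [json.dumps(r) for r in records]
restored = [json.loads(line) for line in lines]
assert restored == records
```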

So my idea is to start out on the road of implementing (for fun) a simple image indexing system for rather large sets of images. It will have an associated tool set, an API, and maybe even a query language in the future.

The system will:

Allow indexing of large numbers of images in arbitrary hierarchical directory structures

Index images based on standard attributes such as:

+ Acquisition Date/Time
+ Name
+ Source (e.g. camera)
+ Type
+ Size
+ Bit Depth
+ Exif Data, e.g.:
–> Location
–> Author
–> Acquisition parameters (aperture, exposure time etc.)
+ Etc.

Index images optionally based on computer vision metrics, e.g.:
+ Brightness
+ Sharpness
+ Etc.
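As a sketch of what such metrics might look like (using NumPy only here – a real implementation would likely use OpenCV), average brightness is just the mean grey level, and a common sharpness proxy is the variance of the image’s Laplacian:

```python
import numpy as np

def brightness(img: np.ndarray) -> float:
    """Average grey level of the image."""
    return float(img.mean())

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian - a rough focus/sharpness measure."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return float(lap.var())

flat = np.full((64, 64), 63.0)  # uniform grey image
noisy = flat + np.random.default_rng(0).normal(0, 10, (64, 64))

print(brightness(flat))   # 63.0
print(sharpness(flat))    # 0.0 - a flat image has no detail
print(sharpness(noisy) > sharpness(flat))  # noise adds high-frequency detail
```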

Allow users to define their own attributes for indexing, e.g.:
+ Define image attributes based on an OpenCV algorithm
+ Define attributes based on the contents of the image file name.
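A user-defined attribute could be as simple as a named function that maps an image path (or the decoded image) to a value. Here is a minimal sketch of such a registry, with a file-name-based extractor as an example – the naming convention ‘cam&lt;N&gt;_&lt;frame&gt;.png’ is invented purely for illustration:

```python
import re
from typing import Callable, Dict

# Registry of user-defined attribute extractors: name -> function(path) -> value.
EXTRACTORS: Dict[str, Callable[[str], object]] = {}

def attribute(name: str):
    """Decorator that registers a user-defined attribute extractor."""
    def register(fn):
        EXTRACTORS[name] = fn
        return fn
    return register

@attribute("camera")
def camera_from_name(path: str):
    # Assumes a (hypothetical) file naming convention like 'cam2_000123.png'.
    m = re.search(r"cam(\d+)_", path)
    return int(m.group(1)) if m else None

def extract_all(path: str) -> dict:
    """Run every registered extractor against one image path."""
    return {name: fn(path) for name, fn in EXTRACTORS.items()}

print(extract_all("images/cam2_000123.png"))  # {'camera': 2}
```

An OpenCV-based extractor would slot into the same registry: it would open the image and compute its value from the pixels rather than the name.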

The system will:

+ Have no dependencies on technologies such as database systems, etc.
+ Be cross-platform

To get the ball rolling, and so that we can say the first sod has been turned, here is some (naive) Python which scans a directory tree of images and creates a flat CSV file of each image’s name, path and size:
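A minimal sketch of such a scanner – the file name scan_images.py, the set of recognised extensions, and the CSV column order are all my own choices:

```python
"""scan_images.py - walk a directory tree and index images into a flat CSV.

Usage: python scan_images.py <root_dir> <output.csv>
"""
import csv
import os
import sys

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tif", ".tiff", ".bmp"}

def index_images(root_dir: str, csv_path: str) -> int:
    """Write one CSV row (name, path, size) per image found under root_dir."""
    count = 0
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["name", "path", "size"])
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                if os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS:
                    path = os.path.join(dirpath, name)
                    writer.writerow([name, path, os.path.getsize(path)])
                    count += 1
    return count

if __name__ == "__main__":
    print(index_images(sys.argv[1], sys.argv[2]), "images indexed")
```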

Run it from the command line, passing in the root of the directory tree to scan and the path of the CSV file to write.

Once the directory tree has been walked and the CSV file generated we can use the following script to query images:
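A correspondingly naive query script might load the CSV into memory and filter it with a predicate – again, the file name query_images.py and the predicate style are my own choices:

```python
"""query_images.py - filter a CSV image index with a simple predicate."""
import csv

def load_index(csv_path: str):
    """Read the index CSV into a list of dicts, converting size to int."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["size"] = int(row["size"])
    return rows

def query(rows, predicate):
    """Return the rows for which predicate(row) is true."""
    return [row for row in rows if predicate(row)]

# Example: images whose file size > 76000 bytes and whose name contains '_43'.
# matches = query(load_index("index.csv"),
#                 lambda r: r["size"] > 76000 and "_43" in r["name"])
```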

This allows us to run queries from the command line – for example, to quickly list the images whose file size is greater than 76000 bytes and whose names contain ‘_43’.

This is a really simple first step but it does demonstrate how even a flat ‘index’ of attributes can be of great use.

Next Step:

+ Add more image attributes to the CSV file