
Code to test u-blox Binary GPS Packet Checksum

Here’s something random for a Thursday: the following is some simple C++ code for checking the checksum of a u-blox binary GPS packet. For some reason we get quite a few packet data errors, so it turns out that it really is important to check the checksum!

There must be an unwritten (or written?) software engineering rule which states that you should always check a checksum if one’s provided??!?

Note: make sure to pass only complete packets to this function; it assumes it has everything it needs to work with!
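
Something along these lines, assuming the standard UBX framing of two sync bytes, class, ID, a 2-byte little-endian payload length, the payload and finally the two 8-bit Fletcher checksum bytes (CK_A and CK_B), which cover everything between the sync bytes and the checksum itself:

    #include <cstdint>
    #include <cstddef>

    // Verify the checksum of a complete u-blox UBX packet. The buffer must
    // contain the full frame:
    //   sync1, sync2, class, id, len_lo, len_hi, payload..., CK_A, CK_B
    bool ubxChecksumOk(const std::uint8_t* packet, std::size_t packetLen)
    {
        if (packetLen < 8)                   // smallest possible frame
            return false;

        // Payload length is a little-endian 16-bit value at offset 4.
        const std::size_t payloadLen = packet[4] | (packet[5] << 8);
        if (packetLen != payloadLen + 8)     // not a complete frame
            return false;

        std::uint8_t ckA = 0;
        std::uint8_t ckB = 0;

        // The checksum covers class, ID, length and payload, i.e. everything
        // except the sync bytes and the checksum bytes themselves.
        for (std::size_t i = 2; i < packetLen - 2; ++i)
        {
            ckA = ckA + packet[i];
            ckB = ckB + ckA;
        }

        return ckA == packet[packetLen - 2] && ckB == packet[packetLen - 1];
    }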

Geotag – EXIF GPS Latitude field format with libEXIF

I have been developing some software to geotag JPEG images by adding EXIF GPS information using libEXIF. This is very handy, as loads of applications (GIS systems, Google Maps, etc.) can then correctly geographically position your images.

As usual I started in the middle rather than at the beginning and got a bit confused by how GPS latitude and longitude fields are specified in EXIF, so I decided to try to describe it here with pictures (and test out the Google drawing app at the same time).

So, latitude (and longitude) can be expressed in different ways, but it is essentially just an angle. Common ways of expressing these angles are:

Degrees, minutes & seconds (with decimal places)
N 52 58 40.44

Degrees & minutes (with decimal places)
N 52 58.674

Degrees (with decimal places)
52.97790

The EXIF latitude field allows you to specify the angle in any of these forms; it is made up of 3 parts as follows:

1.) Degrees – Rational (8 bytes)
2.) Minutes – Rational (8 bytes)
3.) Seconds – Rational (8 bytes)

Each part is an EXIF Rational. It is hard to find a description of its format, but an EXIF Rational contains two 4-byte words and works like a fraction: the first word is the numerator (the value’s magnitude) and the second is the denominator (the units it is counted in). Consider the following values (where ‘/’ should be read as ‘over’ or ‘divided by’):

a.) 52 = 52 / 1 (52 units)
b.) 40.44 = 4044 / 100 (4044 hundredths)
c.) 52.97790 = 52977900 / 1000000 (52977900 millionths)
d.) 0 = 0/1

The last value (0/1) is handy, as it allows us to specify, say, 0 seconds if we only want to provide degrees and fractional minutes.
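
In libEXIF terms each of these is an ExifRational, which is just a numerator and denominator pair; as a sketch, the examples above map onto it like so:

    #include <libexif/exif-utils.h>

    // The example values above, expressed as libEXIF ExifRationals
    // (numerator first, then denominator):
    ExifRational a = { 52,       1       };   // 52
    ExifRational b = { 4044,     100     };   // 40.44
    ExifRational c = { 52977900, 1000000 };   // 52.97790
    ExifRational d = { 0,        1       };   // 0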

To set a rational we can use libEXIF’s exif_set_rational() function like this:
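
Something like this minimal sketch (the buffer and byte order here are placeholders; in real code they come from the ExifEntry being filled in and from exif_data_get_byte_order()):

    #include <libexif/exif-utils.h>

    // Write 52 degrees (52/1) into an 8-byte rational slot.
    void setDegrees(unsigned char* buf, ExifByteOrder order)
    {
        ExifRational degrees;
        degrees.numerator   = 52;
        degrees.denominator = 1;
        exif_set_rational(buf, order, degrees);
    }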

Or, more generally, if you want to set a value to 6 decimal places:
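
For example, with a small helper along these lines (the helper name and the rounding are my choices, not part of libEXIF):

    #include <libexif/exif-utils.h>
    #include <cmath>

    // Encode a non-negative floating-point value as an EXIF rational with
    // 6 decimal places of precision (denominator 1,000,000).
    void setRational6dp(unsigned char* buf, ExifByteOrder order, double value)
    {
        ExifRational r;
        r.numerator   = static_cast<ExifLong>(std::lround(value * 1000000.0));
        r.denominator = 1000000;
        exif_set_rational(buf, order, r);
    }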

So now, given a latitude value in degrees, minutes and seconds, all we have to do is create an EXIF_TAG_GPS_LATITUDE tag and add a rational for each. Imagine that we want to encode 52, 58, 40.44; the tag data will then end up looking like this:

[Figure: EXIF GPS latitude format – degrees, minutes and seconds]
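
In other words, the 24-byte tag data holds three rationals back to back, which could be written with something like this sketch (assuming data points at the entry’s 24-byte buffer):

    #include <libexif/exif-utils.h>

    // Fill the 24-byte GPS latitude data with 52 degrees, 58 minutes,
    // 40.44 seconds: one 8-byte rational per component.
    void writeDegMinSec(unsigned char* data, ExifByteOrder order)
    {
        ExifRational degrees = { 52,   1   };
        ExifRational minutes = { 58,   1   };
        ExifRational seconds = { 4044, 100 };   // 40.44

        exif_set_rational(data,      order, degrees);
        exif_set_rational(data + 8,  order, minutes);
        exif_set_rational(data + 16, order, seconds);
    }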

This is all very well, but I don’t normally bother holding minutes and seconds in my code; instead I prefer to use a degree value with many decimal places, e.g. 52.97790. No problem: this is where our rational value 0/1 comes in handy, and the value can be represented as follows:

[Figure: EXIF GPS latitude format – decimal degrees]

So wrapping all of this up, here is some example code that sets a decimal degree value for latitude:
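
Roughly, something like this (the entry-creation details are a sketch and the latitude is assumed to be positive, since the sign actually lives in the separate GPSLatitudeRef tag, which isn’t shown here):

    #include <libexif/exif-data.h>
    #include <libexif/exif-utils.h>
    #include <cmath>
    #include <cstdlib>
    #include <cstring>

    // Add an EXIF_TAG_GPS_LATITUDE entry holding a decimal-degree latitude
    // such as 52.97790, encoded as degrees (to 6 decimal places), 0/1
    // minutes and 0/1 seconds.
    void setGpsLatitude(ExifData* exif, double latitude)
    {
        const ExifByteOrder order = exif_data_get_byte_order(exif);

        // Create the entry and attach it to the GPS IFD.
        ExifEntry* entry  = exif_entry_new();
        entry->tag        = static_cast<ExifTag>(EXIF_TAG_GPS_LATITUDE);
        entry->format     = EXIF_FORMAT_RATIONAL;
        entry->components = 3;
        entry->size       = 3 * exif_format_get_size(EXIF_FORMAT_RATIONAL);
        entry->data       = static_cast<unsigned char*>(std::malloc(entry->size));
        std::memset(entry->data, 0, entry->size);
        exif_content_add_entry(exif->ifd[EXIF_IFD_GPS], entry);
        exif_entry_unref(entry);   // the IFD now holds its own reference

        // Degrees to 6 decimal places; minutes and seconds are 0/1.
        ExifRational degrees = {
            static_cast<ExifLong>(std::lround(latitude * 1000000.0)), 1000000 };
        ExifRational zero = { 0, 1 };

        exif_set_rational(entry->data,      order, degrees);
        exif_set_rational(entry->data + 8,  order, zero);    // minutes
        exif_set_rational(entry->data + 16, order, zero);    // seconds
    }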

I will probably do another post that details how to write EXIF data into a JPEG image’s header using libEXIF and libJPEG.

The Google drawing app actually worked quite well!

Algorithm to calculate speed from two GPS latitude and longitude points and time difference

I found this good description of how to calculate an approximate speed over ground given two latitude and longitude coordinates and a time difference:

http://answers.yahoo.com/question/index?qid=20110325075640AADHGXI

It involves first plotting your two GPS points on a spherical model of the Earth, calculating the angle between them using a dot product, calculating the distance using this angle and the Earth’s radius, and finally dividing by the elapsed time to approximate the speed.

Thanks to bpiguy for the answer!

Now I just have to implement it….

Update:

I finally got around to writing some code to play with this distance calculation:
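
Something along these lines, with the ‘r’ variables holding the radian conversions (the function name and the mean Earth radius of 6371000 m are my choices):

    #include <cmath>

    // Approximate great-circle distance in metres between two points given
    // as signed decimal-degree latitude/longitude, using a spherical Earth.
    double distanceMetres(double lat1, double lon1, double lat2, double lon2)
    {
        const double kPi           = 3.14159265358979323846;
        const double kDegToRad     = kPi / 180.0;
        const double kEarthRadiusM = 6371000.0;   // mean Earth radius

        // The 'r' variables hold the radian conversions.
        const double rlat1 = lat1 * kDegToRad;
        const double rlon1 = lon1 * kDegToRad;
        const double rlat2 = lat2 * kDegToRad;
        const double rlon2 = lon2 * kDegToRad;

        // Dot product of the two unit position vectors gives the cosine of
        // the angle between them (the spherical law of cosines).
        double cosAngle = std::sin(rlat1) * std::sin(rlat2)
                        + std::cos(rlat1) * std::cos(rlat2) * std::cos(rlon2 - rlon1);

        // Clamp against rounding errors before taking the arc cosine.
        if (cosAngle >  1.0) cosAngle =  1.0;
        if (cosAngle < -1.0) cosAngle = -1.0;

        // Angle (in radians) times radius gives the arc length.
        return kEarthRadiusM * std::acos(cosAngle);
    }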

This function takes the latitude and longitude in signed decimal format and returns the distance in metres. I have left in the ‘r’s for clarity, but if efficiency is what you’re after then they can be removed, as the original post suggests…

Now, once you have the distance between the points, you can estimate the average speed by dividing that distance by the time between the two position measurements, like this:
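
A sketch of that, assuming a simple position record holding the coordinates and a millisecond timestamp, and reusing the distanceMetres() function from the distance sketch above:

    // A GPS fix: coordinates plus the time at which they were measured.
    struct GpsPosition
    {
        double    latitude;    // signed decimal degrees
        double    longitude;   // signed decimal degrees
        long long timeMs;      // timestamp in milliseconds
    };

    // Estimate the average speed between two GPS fixes.
    void estimateSpeed(const GpsPosition& p1, const GpsPosition& p2,
                       double& metresPerSecond, double& kmPerHour)
    {
        const double distanceM = distanceMetres(p1.latitude, p1.longitude,
                                                p2.latitude, p2.longitude);
        const double elapsedS  = (p2.timeMs - p1.timeMs) / 1000.0;

        metresPerSecond = distanceM / elapsedS;
        kmPerHour       = metresPerSecond * 3.6;   // 1 m/s = 3.6 km/h
    }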

This code assumes that p1 and p2 represent the first and second measured GPS positions and that the time-stamp recorded at each is in milliseconds; it calculates both metres per second and kilometres per hour. It is important to note that this is only an estimate of the average speed between the two points, and its accuracy will depend on various factors, including the distance and time elapsed between the two GPS measurements.