Posts

NTP with GPS and PPS working without network clock

Is it possible to have a PPS disciplined NTP server without a network reference clock or network connection?

While doing some camera synchronisation and timestamp testing, I set up a Raspberry Pi NTP server with PPS using an Adafruit GPS hat so that I could sync my cameras’ time to NTP, trigger them using the PPS signal, generate UTC timestamps and check the timestamps against the expected UTC second boundaries.

As part of this, the question once again arose: can I configure an NTP server with GPS time and PPS without having to configure a network reference clock alongside the GPS and PPS sources?  This was important as my test setup didn’t have access to an external network, and yet I wanted typical NTP offsets of < 100µs.

Well, after quite a bit of messing around, I finally got it working. There are lots of pages that detail how to set up a Pi as an NTP server with PPS; some mention that you must have a network reference and some mention that you don’t. I had to do a lot of digging and a lot of messing to get to the bottom of things.

Amusingly, NTP configuration is something that ChatGPT is simply terrible at – don’t use it for this, it will just waste your time!

I had lots of problems, but the 2 most persistent were:

  1. The PPS source being marked as a false ticker (with an x to its left in the ntpq -p output)
  2. The PPS source not being referenced at all, “when” would remain at “-”

For ages, I lurched between problems 1 and 2…

Eventually I got it working for my setup.  The tricks that got it working for me were:

  • I needed 3 reference clocks for the PPS source to be used: the PPS time, the GPS time and a 3rd reference. Normally the 3rd is a network NTP server, but I found that configuring the local clock (127.127.1.0) worked instead.
  • I needed to be careful with my stratum settings: what worked for me was the local clock at stratum 15, GPS at stratum 10 and PPS at stratum 0.
  • Marking the PPS source as prefer (and not the GPS source, as is often recommended?)

My working configuration:

tos mindist 0.500
# Local Clock
server 127.127.1.0
fudge 127.127.1.0 stratum 15
# GPS/NMEA Source from Hat
server 127.127.28.0 minpoll 4 maxpoll 4 iburst
fudge 127.127.28.0 time1 0.000 refid GPS stratum 10
# PPS Source, stratum 0 and prefer
server 127.127.22.0 minpoll 4 maxpoll 4
fudge 127.127.22.0 refid PPS stratum 0 flag3 1 flag4 1 prefer

I deliberately left the time offset for the GPS source at 0.0 rather than trying to chase down the time packet offset so that I could see an “honest” offset value in the ntpq output:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          15 l  58m   64    0    0.000    0.000   0.000
*SHM(0)         .GPS.            10 l   16   16  377    0.000  -106.45   1.145
oPPS(0)         .PPS.             0 l   15   16  377    0.000   -0.001   0.001

Code to test u-blox Binary GPS Packet Checksum

Here’s something random for a Thursday: the following is some simple C++ code for checking the checksum of a u-blox binary GPS packet. For some reason we get quite a few packet data errors, so it turns out that it is important to check the checksum!

There must be an unwritten (or written?) software engineering rule which states that you should always check a checksum if one is provided!

Note: Make sure to pass only complete packets to this function, it assumes it has everything to work with!

//
//
bool is_checksum_ok(unsigned char* buf) {
	// Packet format:
	// SYNC<1> SYNC<1> CLASS<1> ID<1> LENGTH<2> PAYLOAD CHKA<1> CHKB<1>
	// --------------- 4 ----------->|
	// --------------- 6 --------------------->|
	// --------------- 6 + length ---------------------------->|
	// Payload length field is little-endian (bless)
	unsigned short length = buf[5] << 8 | buf[4];
	unsigned char a = 0;
	unsigned char b = 0;
	// We calculate the checksum over the entire packet _except_
	// for the 2 bytes at the beginning and the 2 checksum bytes at
	// the end.
	for (unsigned short i = 2; i < 6 + length; ++i) {
		a += buf[i];
		b += a;
	}
	// Pull out checksum bytes
	unsigned char ra = buf[6 + length];
	unsigned char rb = buf[6 + 1 + length];
	// and compare
	return a == ra && b == rb;
}
//
//

Geotag – EXIF GPS Latitude field format with libEXIF

I have been developing some software to geotag jpeg images by adding EXIF GPS information using libEXIF. This is very handy as loads of applications, such as GIS systems and Google Maps, can then correctly geographically position your images.

As usual I started in the middle rather than starting at the beginning and got a bit confused by how GPS latitude and longitude fields are specified in EXIF, so I decided to try to describe it here with pictures (so that I can test out the google drawing app).

So, latitude (and longitude) can be expressed in different ways, but it is essentially just an angle. Common ways of expressing these angles are:

Degrees, minutes & seconds (with decimal places)
N 52 58 40.44

Degrees & minutes (with decimal places)
N 52 58.674

Degrees (with decimal places)
52.97790

The EXIF latitude field allows you to specify the angle in any of these forms; it is made up of 3 parts as follows:

1.) Degrees – Rational (8 bytes)
2.) Minutes – Rational (8 bytes)
3.) Seconds – Rational (8 bytes)

Each part is an EXIF Rational. It is hard to find a description of its format, but an EXIF rational contains two 4-byte words and works like a fraction: the first word is the numerator and the second is the denominator. Consider the following values (where ‘/’ should be read as ‘over’ or ‘divided by’):

a.) 52 = 52 / 1 (52 units)
b.) 40.44 = 4044 / 100 (4044 hundredths)
c.) 52.97790 = 52977900 / 1000000 (52977900 millionths)
d.) 0 = 0/1

The last value (0/1) is handy as it allows us to specify, say, 0 seconds if we only want to provide degrees and fractional minutes.

To set a rational we can use libEXIF’s set_rational() function like this:

//
// 40.44 = 4044 / 100 (4044 hundredths)
exif_set_rational(entry->data, EXIF_BYTE_ORDER_INTEL, { 4044, 100 });
//

or more generally, if, for example, you want to set a value to 6 decimal places:

//
double lat = 52.977900; // double: float hasn't enough precision for 6 decimal places
exif_set_rational(entry->data, FILE_BYTE_ORDER, { (unsigned)(lat * 1000000.0), 1000000 });
//

So now, given a latitude value in degrees, minutes and seconds, all we have to do is create an EXIF_TAG_GPS_LATITUDE tag and add a rational for each. Imagine that we want to encode 52, 58, 40.44; the tag data will then end up looking like this:

[Diagram: EXIF GPS latitude field – degrees, minutes and seconds rationals]

This is all very well, but I don’t normally bother holding minutes and seconds in my code; instead I prefer to use a degree value with many decimal places, e.g. 52.97790. No problem: this is where our rational value 0/1 comes in handy, and the field can be represented as follows:

[Diagram: EXIF GPS latitude field – decimal degrees with 0/1 minutes and seconds]

So wrapping all of this up, here is some example code that sets a decimal degree value for latitude:

/*
 * create_tag() is from the write-exif.c sample code that is floating
 * around the interweb - with thanks to whoever created it!
 */
/* Create a brand-new tag with a data field of the given length, in the
 * given IFD. This is needed when exif_entry_initialize() isn't able to create
 * this type of tag itself, or the default data length it creates isn't the
 * correct length.
 */
static ExifEntry *create_tag(ExifData *exif, ExifIfd ifd, ExifTag tag, size_t len)
{
	void *buf;
	ExifEntry *entry;
	/* Create a memory allocator to manage this ExifEntry */
	ExifMem *mem = exif_mem_new_default();
	/* Create a new ExifEntry using our allocator */
	entry = exif_entry_new_mem (mem);
	/* Allocate memory to use for holding the tag data */
	buf = exif_mem_alloc(mem, len);
	/* Fill in the entry */
	entry->data = buf;
	entry->size = len;
	entry->tag = tag;
	entry->components = len;
	entry->format = EXIF_FORMAT_UNDEFINED;
	/* Attach the ExifEntry to an IFD */
	exif_content_add_entry (exif->ifd[ifd], entry);
	/* The ExifMem and ExifEntry are now owned elsewhere */
	exif_mem_unref(mem);
	exif_entry_unref(entry);
	return entry;
}
// Set a decimal degree value with support for 6 decimal places
//
//
  // create our latitude tag, the whole field is 24 bytes long (3 rationals)
  entry = create_tag(exif, EXIF_IFD_GPS, EXIF_TAG_GPS_LATITUDE, 24);
  // Set the field's format and number of components, this is very important!
  entry->format = EXIF_FORMAT_RATIONAL;
  entry->components = 3;
  // Degrees, to 6 decimal places (double: float hasn't enough precision)
  double lat = 52.977900;
  exif_set_rational(entry->data, EXIF_BYTE_ORDER_INTEL, { (unsigned)(lat * 1000000.0), 1000000 });
  // Minutes and seconds are 0/1 - each rational occupies 8 bytes
  exif_set_rational(entry->data + 8, EXIF_BYTE_ORDER_INTEL, { 0, 1 });
  exif_set_rational(entry->data + 16, EXIF_BYTE_ORDER_INTEL, { 0, 1 });
//
//
    

I will probably do another post that details how to write EXIF data into a jpeg image’s header using libEXIF and libJPEG.

The google drawing app actually worked quite well!

Algorithm to calculate speed from two GPS latitude and longitude points and time difference

To estimate average speed of travel between two close GPS points.

A relatively simple method involves treating the globe as a sphere with a radius of 6378100 metres and calculating positions on this sphere for the two locations. Of course the earth isn’t a sphere, but as we are aiming to calculate the speed of travel between 2 quite close points, we should be OK.

First we plot the two GPS points on a spherical model of the earth, then calculate the angle between them using a dot product, then calculate the ‘Great Circle’ distance using this angle and the earth’s radius and finally we divide by the elapsed time to approximate the speed.

Here is some code to demonstrate the calculation.

This function takes the latitudes and longitudes in signed decimal degrees and returns the distance in metres. I have left the ‘r’s in for clarity, but if efficiency is what you’re after then they can be factored out.

//
//
double distance_on_geoid(double lat1, double lon1, double lat2, double lon2) {
    // Convert degrees to radians
    lat1 = lat1 * M_PI / 180.0;
    lon1 = lon1 * M_PI / 180.0;
    lat2 = lat2 * M_PI / 180.0;
    lon2 = lon2 * M_PI / 180.0;
    // radius of earth in metres
    double r = 6378100;
    // P
    double rho1 = r * cos(lat1);
    double z1 = r * sin(lat1);
    double x1 = rho1 * cos(lon1);
    double y1 = rho1 * sin(lon1);
    // Q
    double rho2 = r * cos(lat2);
    double z2 = r * sin(lat2);
    double x2 = rho2 * cos(lon2);
    double y2 = rho2 * sin(lon2);
    // Dot product, clamped to [-1, 1] to stop floating-point rounding
    // pushing acos() out of its domain when the points are very close
    double dot = (x1 * x2 + y1 * y2 + z1 * z2);
    double cos_theta = dot / (r * r);
    if (cos_theta > 1.0) cos_theta = 1.0;
    if (cos_theta < -1.0) cos_theta = -1.0;
    double theta = acos(cos_theta);
    // Distance in Metres
    return r * theta;
}

Now once you have the distance between the points you can estimate the average speed by dividing this distance by the time between the two position measurements, something like this:

 

//
//
auto dist = distance_on_geoid(p1.latitude, p1.longitude, p2.latitude, p2.longitude);
// timestamp is in milliseconds
auto time_s = (p2.timestamp - p1.timestamp) / 1000.0;
double speed_mps = dist / time_s;
double speed_kph = (speed_mps * 3600.0) / 1000.0;

This code assumes that p1 and p2 represent the first and second measured GPS positions and that the timestamp recorded at each is in milliseconds; it calculates both metres per second and kilometres per hour. It is important to note that this is only an estimate of the average speed between the two points, and its accuracy will depend on various factors, including the distance and time elapsed between the two GPS measurements.