Introduction to UST and UST/MSC

By Chris Pirazzi. Some material stolen from Wiltse Carpenter, Doug Cook, Bryan James, and Bruce Karsh.

This document describes the UST support in the VL, AL, GL, CL, MD, tserialio, DM, and other libraries (see What Are the SGI Video-Related Libraries?). 

What Is It?

SGI's UST support lets you measure the time at which signals arrived at an input jack of the machine, and schedule the time at which your data will hit an output jack of the machine. You can use this support to synchronize signals with each other, with signals on other machines, and with other events. In many cases you will be able to synchronize signals to an accuracy of tens of microseconds.

 For a taste of why synchronization is hard and how the UST support solves the problem, see Why Use UST?.

Accuracy vs. Latency

The UST support has no interactions with the UNIX process scheduler. Therefore, the UST support will not affect your program's latency: the delays between your program and the jacks are determined by scheduling and buffering, not by UST. You can affect these latencies in the average case using the tricks described in Seizing Higher Scheduling Priority.

The UST Clock

Every SGI machine has a UST clock. UST stands for "Universal System Time" or "Unadjusted System Time." The UST clock counts nanoseconds and, unlike the gettimeofday() clock, is never adjusted forwards or backwards while the system runs. Every component of the system (your program, audio subsystems, video subsystems, serial subsystems, etc.) can snap the value of the UST clock.

How to Use UST

You can read the UST clock directly from your program with dmGetUST(). However, it's much more useful to get USTs from the same library you're using to input or output data, as the sections below describe.
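 For example, here is a minimal sketch of reading the clock directly; dmGetUST() fills in a 64-bit nanosecond count and returns 0 on success (see dmGetUST(3dm)):

#include <dmedia/dmedia.h>
#include <stdio.h>

int main(void)
{
  unsigned long long ust;

  if (dmGetUST(&ust) != 0)
    return 1;                            /* should not happen */
  printf("current UST: %llu ns\n", ust);
  return 0;
}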
 
  The point within an audio frame, video field, MIDI event, etc. which the UST refers to is called its synchronization point. For video and graphics signals, this is defined in videosync(3dm). For serial bytes, this is the half amplitude point of the leading edge of the byte's start bit. For MIDI events, this is the synchronization point of the first serial byte of the MIDI message (except sysex messages, which the MIDI library currently does not accurately timestamp). For analog audio signals, this is the instant at which the corresponding voltage level is sampled or produced. For digital audio signals, this is the edge of the recovered sample clock from the input signal or the driving sample clock for the output signal.

 You can perform all of the timing and synchronization tasks listed above by simply comparing USTs you get from or give to different subsystems. For example, to play audio in sync with video, you compare the UST at which your next audio frame will hit the audio jack with the UST at which your next field will hit the video jack, and adjust what you enqueue until those USTs line up.

UST Timestamping in the MIDI Library

The MIDI library represents MIDI events with this structure:
 
 
typedef struct __mdevent {
        char msg[4];                /* normal (non-sysex) message bytes */
        char *sysexmsg;             /* pointer to message data, sysex only */
        long long stamp;            /* time stamp in ns */
        int msglen;                 /* length of data, sysex only */
} MDevent;
The MDevent contains a MIDI event and a field "stamp" that identifies the time of that event. The library supports lots of different stamp modes that affect the interpretation of "stamp," but the one you want to use is the simplest one:
{
  MDport port;
  /* ... open port ... */
  mdSetStampMode(port, MD_RELATIVESTAMP);
  mdSetStartPoint(port, 0, 0);
  /* ... send or receive MIDI ... */
}
In this mode, "stamp" is simply a UST time.

 To receive data, you call mdReceive() and you get back an MDevent with the event and UST stamp filled in. To send data, you provide an MDevent to mdSend(), and the MIDI subsystem will output the specified MIDI message at the specified UST.
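 For example, here is a minimal receive sketch under the settings above. It is only a sketch: the NULL argument to mdOpenInPort(), requesting a default input, is an assumption (normally you pick a name returned by mdGetName()):

#include <dmedia/midi.h>
#include <stdio.h>

int main(void)
{
  MDport port;
  MDevent ev;

  if (mdInit() <= 0)                     /* number of configured interfaces */
    return 1;
  port = mdOpenInPort(NULL);
  mdSetStampMode(port, MD_RELATIVESTAMP);
  mdSetStartPoint(port, 0, 0);

  while (mdReceive(port, &ev, 1) == 1)   /* blocks until an event arrives */
    printf("MIDI event at UST %lld ns\n", ev.stamp);

  mdClosePort(port);
  return 0;
}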

UST in the Timestamped Serial I/O Library

The tserialio(3) library (currently supported on IRIX 6.3) lets you measure the time at which bytes arrive at a serial port to within ±1ms of accuracy, and lets you schedule serial bytes for output to within ±1ms accuracy. When you read serial bytes with tsRead(3), you receive each byte paired with its UST. When you write serial bytes with tsWrite(3), you specify the UST at which you want each byte to go out the serial port.

UST Timestamping for Video Library Input

When you receive a field or frame in the classic VL API using vlGetNextValid(), you can pass the returned VLInfoPtr into vlGetDMediaInfo(). This returns a DMediaInfo structure. DMediaInfo.ustime contains the UST of that field or frame. DMediaInfo.sequence contains a coincident snap of a counter that increments once per video field time. You can tell if you dropped any input data by seeing if DMediaInfo.sequence increments by more than 1 (if the time spacing of your video buffers is a field time) or 2 (if the time spacing of your buffers is a frame time).
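 For example, here is a sketch of the classic-API bookkeeping, assuming a running field-spaced transfer whose VLServer and VLBuffer were set up elsewhere (path setup omitted):

#include <dmedia/vl.h>
#include <stdio.h>

/* call from your event loop; svr and buffer come from your path setup */
void drain_fields(VLServer svr, VLBuffer buffer)
{
  static int have_last = 0;
  static unsigned lastseq;
  VLInfoPtr info;

  while ((info = vlGetNextValid(svr, buffer)) != NULL)
    {
      DMediaInfo *dm = vlGetDMediaInfo(svr, buffer, info);

      printf("field at UST %lld, sequence %u\n",
             (long long)dm->ustime, (unsigned)dm->sequence);
      if (have_last && (unsigned)dm->sequence != lastseq + 1)
        printf("dropped %u fields!\n", (unsigned)dm->sequence - lastseq - 1);
      lastseq = (unsigned)dm->sequence;
      have_last = 1;
      vlPutFree(svr, buffer);            /* hand the entry back to the VL */
    }
}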

 When you receive a field or frame in a DMbuffer using the O2 or cross-platform VL buffering API (see What Are the SGI Video-Related Libraries? for more info), you can pass the DMbuffer into dmBufferGetUSTMSCpair(). This returns a USTMSCpair structure. USTMSCpair.ust contains the UST of the field or frame. USTMSCpair.msc contains a coincident snap of a counter that increments once per buffer time (field or frame, depending on the time spacing of the data in your buffers). You can tell if you dropped any data by seeing if USTMSCpair.msc has incremented by more than 1. This MSC counter has some useful properties which we describe below along with UST/MSC.
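 A sketch for the DMbuffer case might look like the following; the dmBufferGetUSTMSCpair(buffer, &pair) calling form is an assumption here, so check dmbuffer(3dm) on your release:

#include <dmedia/dm_buffer.h>
#include <stdio.h>

/* initialize *last_msc to -1 before the first call */
void note_buffer_timing(DMbuffer buf, stamp_t *last_msc)
{
  USTMSCpair pair;

  dmBufferGetUSTMSCpair(buf, &pair);     /* assumed calling form */
  printf("buffer at UST %lld, MSC %lld\n",
         (long long)pair.ust, (long long)pair.msc);
  if (*last_msc >= 0 && pair.msc != *last_msc + 1)
    printf("dropped %lld buffers!\n", (long long)(pair.msc - *last_msc - 1));
  *last_msc = pair.msc;
}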

 The VL also supports UST/MSC, another API for determining the UST of data which works for input and output. We describe UST/MSC below.

 We will explain how to distinguish dominant and non-dominant fields along with UST/MSC below.
 
 

UST Timestamping in Other Libraries

The Image Conversion library dmIC (part of libdmedia) lets you pass DMbuffers into a converter. The DMbuffers which exit the converter will have an identical USTMSCpair (retrievable with dmBufferGetUSTMSCpair()) to the corresponding input DMbuffer.

UST and gettimeofday()

You can use dmGetUSTCurrentTimePair() to get a UST and a struct timeval which represent the same instant of time. Unlike UST, the clock returned by gettimeofday() may occasionally be adjusted forwards or backwards by system administrators and network time daemons. For this reason and other reasons given in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below, you'll probably want to call this function periodically to maintain a fresh correlation between UST and gettimeofday().
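 For example, a sketch of taking one such correlation:

#include <dmedia/dmedia.h>
#include <sys/time.h>
#include <stdio.h>

int main(void)
{
  unsigned long long ust;
  struct timeval tv;

  if (dmGetUSTCurrentTimePair(&ust, &tv) != 0)
    return 1;
  printf("UST %llu ns corresponds to %ld.%06ld s since the epoch\n",
         ust, (long)tv.tv_sec, (long)tv.tv_usec);
  return 0;
}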

 This function provides one possible way to relate the UST clocks of two different machines. See Synchronization Across Machines for more information.
 
 

UST for Sampled Data Types: UST/MSC

UST/MSC is a way to get at UST for sampled data types like audio and video. These data types have a certain unit (an audio frame, a video field or frame) which repeats at a regular interval. The input or output of each unit is driven not by your software, but by an electrical oscillator in your machine or in some external piece of equipment. For example, a video output device which is genlocked to an external signal must precisely align each outgoing field with those in the external signal.

 As a result of this, you can think of your signal as having discrete "slots" which each hold one unit of data. Here's an example with an audio and video signal:

[diagram: an audio signal and a video signal, each drawn as a sequence of numbered slots, one unit of data per slot, with a UST at each slot]

The UST of a slot is the UST of the synchronization point of the data in that slot (see How to Use UST above). Each slot in a given signal is numbered with an MSC ("media stream count" or "media sequence count").

 If your program inputs a sampled signal, your job is to read the data out of each slot. If your program outputs a sampled signal, your job is to place the right data into each slot. In both cases, you need a way to figure out the UST of any given slot.

 The UST/MSC support in the AL and VL consists of two operations which let you do this:

 - get the MSC of the data you are about to read or write (the frontier MSC), and
 - get a UST/MSC pair telling you the UST at which the slot with a given MSC hit, or will hit, the jack.

The first operation links your data to an MSC. The second operation links an MSC to a UST. You can use both operations to get from your data to UST. This works even if the MSC in the pair differs from the MSC whose UST you want, because you know how the slots are spaced in time (the audio sampling rate, the video frame/field rate). For example, to determine the UST of the next piece of data you're going to read or write (that is, the frontier MSC), do this:
{
  /* these helpers stand for the per-library calls listed below, e.g.
     alGetFrameNumber()/alGetFrameTime() in the AL or
     vlGetFrontierMSC()/vlGetUSTMSCPair() in the VL */
  double ust_per_msc = get_ust_per_msc();
  USTMSCpair pair = get_ust_msc_pair();
  stamp_t frontier_msc = get_frontier_msc();

  /* step 1: figure out which MSC you want a UST for */

  stamp_t desired_msc = frontier_msc;

  /* step 2: compute the UST of that MSC */

  stamp_t desired_ust = pair.ust + (stamp_t)(ust_per_msc * (double)(desired_msc - pair.msc));
}
When extrapolating from a UST/MSC pair as above, be careful to use sufficiently precise C types which will not overflow.

Determining the ust_per_msc figure can sometimes be tricky, because of various idiosyncrasies of audio hardware, video hardware, and video itself. Each library provides a simple way to get a nominal ust_per_msc figure. For any UST/MSC device, you can also measure ust_per_msc empirically instead of computing the nominal ust_per_msc figure. This often has accuracy advantages and works for any sync source. We will describe this more fully in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below.

 You don't have to compute ust_per_msc or get a UST/MSC pair every time you read or write data. Typically, applications compute ust_per_msc only once, and get a new UST/MSC pair every second or so.

 MSC values you get from a particular AL port or VL path are specific to that port/path only. For example, if you input the same video signal using two different VL paths, you must not assume that MSC 123 on one path refers to the same video field/frame as MSC 123 on the other path. When you need to compare the time of data on two different paths or ports, convert to UST.

 After giving the specifics of UST/MSC for each library, we will provide some pictorial examples of the above pseudocode.
 
 

UST/MSC in the Audio Library

The AL UST/MSC support is useful for ports which bring audio data into memory and out of memory (ie, not for monitoring paths):
 
 
type of data in each slot (MSC)    one audio frame
time spacing of each slot (MSC)    one audio frame period
read (dequeue)                     alReadFrames()
write (enqueue)                    alWriteFrames()
get frontier MSC                   alGetFrameNumber()
get UST/MSC pair                   alGetFrameTime()
get nominal UST per MSC            sample code below
Below we will provide many pictorial examples of using the AL's UST/MSC support.

 To get a nominal ust_per_msc figure, do this:
 
 

{
  ALpv pv;
  double samprate;
  double ust_per_msc;

  pv.param = AL_RATE;
  if (alGetParams(res, &pv, 1) < 0 || pv.value.ll <= 0)
    {
      /* cannot determine nominal sample-rate */
    }
  else
    {
      samprate = alFixedToDouble(pv.value.ll);  /* AL_RATE is a fixed-point value */
      ust_per_msc = 1.E9 / samprate;            /* nanoseconds per audio frame */
    }
}
The "res" parameter to alGetParams() is the resource id of your AL clock generator, your AL device that uses that clock generator, or your AL port that uses that device.

 The alGetParams() call will succeed but return a negative sampling rate if the nominal sample-rate cannot be determined. This only happens if the AL device's clock generator's master clock is an AES or ADAT digital signal which does not contain rate information (e.g. an AES source which has not set the rate bits in the subcode).

 You can measure ust_per_msc instead, which works in every case and gives you a more accurate figure. This will be described in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below.
 
 

UST/MSC in the Video Library

The VL UST/MSC support is useful for paths which have a VL_MEM node which brings video data into memory or out of memory:
 
 
type of data in each slot (MSC)    one buffer entry (VLInfoPtr or DMbuffer)
time spacing of each slot (MSC)    VL_RATE (which is in buffer entries per second)
read (dequeue)                     vlGetNextValid() or vlEventRecv()
write (enqueue)                    vlPutValid() or vlDMBufferSend()
get frontier MSC                   vlGetFrontierMSC()
get UST/MSC pair                   vlGetUSTMSCPair()
get nominal UST per MSC            vlGetUSTPerMSC()
On older devices (pre-DIVO), VL UST/MSC requires that VL_RATE be set to the maximum rate for the current video timing and cap type:
 
 
VL_TIMING   VL_CAP_TYPE                                       required VL_RATE
525-line    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS    60/1
            VL_CAPTURE_INTERLEAVED                            30/1
            VL_CAPTURE_EVEN_FIELDS                            30/1
            VL_CAPTURE_ODD_FIELDS                             30/1
625-line    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS    50/1
            VL_CAPTURE_INTERLEAVED                            25/1
            VL_CAPTURE_EVEN_FIELDS                            25/1
            VL_CAPTURE_ODD_FIELDS                             25/1
On older devices, the MSC increases by 1 for each slot in the stream to/from your application:
 
Timing        VL_CAP_TYPE                                        MSC rate           MSC increment per buffer
Interlaced    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS     once per field     1
              VL_CAPTURE_INTERLEAVED                             once per 2 fields  1
              VL_CAPTURE_EVEN_FIELDS or VL_CAPTURE_ODD_FIELDS    once per 2 fields  1
Progressive   (not supported)
On newer devices (DIVO, HDIO, and those released after 1999), the MSC increases by one for each slot in the stream in/out of the machine:
 
Timing        VL_CAP_TYPE                                        MSC rate         MSC increment per buffer
Interlaced    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS     once per field   1
              VL_CAPTURE_INTERLEAVED, VL_CAPTURE_EVEN_FIELDS,
              or VL_CAPTURE_ODD_FIELDS                           once per field   2
Progressive   VL_CAPTURE_INTERLEAVED                             once per frame   1
In addition to its UST/MSC API, the VL has a conceptually simpler UST timestamping scheme for video input, which we described in UST Timestamping for Video Library Input above. Consider using UST timestamping instead of UST/MSC if your application only does video input. If you use UST timestamping, don't forget that DMediaInfo.sequence (classic VL API) is always in video field times and cannot be compared with MSCs, whereas USTMSCpair.msc (O2/cross-platform VL API) has the same units and offset as MSCs you get from the UST/MSC API. USTMSCpair.msc and the frontier MSC may deviate if your input buffer overflows, as described in UST/MSC: Using the Frontier MSC to Detect Errors.
 
 

Dominant/Non-Dominant and F1/F2

For VL_CAPTURE_NONINTERLEAVED paths, each buffer contains either a dominant or non-dominant field (see Definitions: F1/F2, Interleave, Field Dominance, and More). The first field you dequeue on input or enqueue on output after beginning a VL transfer will be a dominant field. Subsequent fields you enqueue or dequeue will alternate between non-dominant and dominant. Therefore, if the device must drop video (input overflow, output underflow, input signal problems), it will skip an even number of fields at the video jack to maintain the sequence. The fields skipped may begin with either a dominant or non-dominant field. Some VL devices allow you to choose F1 or F2 dominance; all other VL devices are fixed at F1 dominance.
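 Because drops on these paths always cover an even number of fields, dominance simply alternates with your transfer count; a sketch of that bookkeeping:

/* For VL_CAPTURE_NONINTERLEAVED paths: the first field dequeued or
   enqueued after the transfer starts is dominant, and the device only
   skips even numbers of fields, so the alternation survives drops. */
int field_is_dominant(unsigned long long nth_field_transferred) /* 0-based */
{
  return (nth_field_transferred % 2) == 0;
}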

 For VL_CAPTURE_FIELDS paths, the device may skip an even or odd number of fields during an input overflow, output underflow, or input signal problem. Therefore, you need a way to determine what field type you are currently enqueuing or dequeuing. This is currently only possible on vino, ev1, ev3, and mvp, where you can use any of DMediaInfo->sequence, USTMSCpair.msc, or an MSC from UST/MSC to determine the field type.
 
 

Jack-to-Jack Paths

Some VL devices support jack-to-jack monitoring or processing paths that do not contain a VL_MEM node. On these paths you can use vlGetPathDelay() to determine the path's jack-to-jack latency in nanoseconds.
 
 

Sample UST/MSC Calculation: Audio Input

First we'll show how to compute the UST of any audio frame you read from an AL input port. The first step is to open the port:

[diagram: your program atop the audio subsystem; the incoming signal drawn as a stack of slots, some not yet inside the machine and some waiting in the AL buffer]

This diagram shows your program sitting atop the audio subsystem (the AL and the audio hardware). We show an incoming audio signal as a stack of slots, each of which holds one frame of audio. Some of those frames have not entered the computer yet, and some are sitting in the computer waiting for you to read them. As in the diagram above, each slot has an MSC and a UST. The UST is the UST at which the data in that slot will hit, or did hit, the audio input jack of the machine.

 The next thing your program does is to read some data with alReadFrames:

[diagram: alReadFrames() moving 4 frames from the AL buffer into your buffer]

The AL examples in this document will use reads and writes of 2-5 frames for typographical convenience. Typical programs transfer 10ms or more of audio frames with each alReadFrames() or alWriteFrames(). UST/MSC works the same either way.

 Now you have some data. The next thing you need to do is figure out the MSC of each of your frames. You do that with one call to alGetFrameNumber():

[diagram: alGetFrameNumber() returning a frontier MSC of 80; the frames just read occupy MSCs 76-79]

alGetFrameNumber() returns a frontier MSC of 80. This is the MSC of the next frame you're about to read from the port. Since audio frames come at regular intervals, and since we're assuming that we read the data out of every slot of the signal (ie, we never drop any audio frames on input), we then know that the MSCs of the 4 frames we just read are 76, 77, 78, and 79.

 Now that we know the MSC of each of our frames, we can figure out the UST of each of the frames using alGetFrameTime():

[diagram: alGetFrameTime() returning the pair (MSC 98, UST u); extrapolating from u by (76-98) frame periods gives the UST of MSC 76, and likewise for MSCs 77-79]

alGetFrameTime() returns a pair of numbers to us: MSC 98 and UST u. This tells us that MSC 98 hits the jack at UST u. We want to know when MSC 76, 77, 78, and 79 hit the jack. We find the UST of MSC 76 by taking u and adjusting it by (76-98) audio frame periods. The audio frame period T is (1/audio sampling rate) in nanoseconds. T is the same as ust_per_msc in the pseudocode above. You can use this simple extrapolation trick to find the UST of any recent MSC, and we do it for MSC 77, 78, and 79 above.

 It doesn't really matter which MSC alGetFrameTime() tells us about. It could return a UST/MSC pair for MSC 100, MSC 96, MSC 80, MSC 76, or any other recent MSC, and your code will still give you a UST typically within a few audio frames of accuracy (on some devices, the UST will be better than audio frame accurate). The reason you need a recent MSC (an MSC within a few seconds of the MSC currently hitting the jack) has to do with a secondary effect called clock drift, which will be explained later.
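 Here is the whole input walkthrough as a sketch; the port name, the 4-frame stereo read, and the hard-coded nominal rate are arbitrary assumptions for illustration:

#include <dmedia/audio.h>
#include <stdio.h>

int main(void)
{
  ALport port = alOpenPort("ust example", "r", 0); /* 0 = default config */
  short buf[4 * 2];                      /* 4 frames of 2-channel audio */
  stamp_t frontier, msc, ust, first_msc, first_ust;
  double ust_per_msc = 1.E9 / 44100.0;   /* nominal; see the AL sample code above */

  alReadFrames(port, buf, 4);

  alGetFrameNumber(port, &frontier);     /* MSC of the next frame we'd read */
  alGetFrameTime(port, &msc, &ust);      /* a recent (MSC, UST) pair */

  first_msc = frontier - 4;              /* frames just read: frontier-4 .. frontier-1 */
  first_ust = ust + (stamp_t)(ust_per_msc * (double)(first_msc - msc));
  printf("first frame hit the input jack at UST %lld\n", first_ust);

  alClosePort(port);
  return 0;
}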
 
 

Sample UST/MSC Calculation: Audio Output

Now we'll show how to compute the UST of the next audio frame you write to an AL output port. This tells you when that audio frame will hit the jack of the computer. If you want a piece of audio data to play at a certain UST, you write some other data to the port (perhaps silence) until you see that the UST of the next audio frame to go into the port is as close as possible to your desired UST. You then switch to writing your desired data. This is the basic mechanism for synchronizing the output of an audio signal with the input or output of other signals (such as video).

 Remember our assumption that your program fills every slot of the output signal with data. This means that every time the output device needs a new audio frame to output, you will have made one available with alWriteFrames(). If the output device begins to starve for data, we say that the buffer is "underflowing." When you first open an AL output port, there are no frames in the buffer, so the buffer is underflowing. The first thing you need to do, before attempting to use UST/MSC, is to get some audio frames in there! You need to enqueue at least enough silence to handle the worst-case amount of time between enqueuing the silence and taking the first UST measurement.

 Let's assume that you have opened an AL output port and you have written enough data to it to keep the port happy for a while. You want to determine which audio data you should write to the port next, so you want to know the UST of the next audio frame you write to the port:

[diagram: your program above the audio output subsystem; slots flow toward the output jack, and a dotted slot marks the next frame you will write]

Note that MSCs now decrease (data gets older) as you move down the diagram, since the data flows in the opposite direction. The slot shown in dotted lines is the next audio frame you will write to the AL. It's not an audio frame that is currently in the AL buffer.

 As with the input case, one call to alGetFrameNumber() tells you the MSC of each of your audio frames:

[diagram: alGetFrameNumber() returning a frontier MSC of 65; your four frames are MSCs 65-68]

alGetFrameNumber() returns a frontier MSC of 65. This is the MSC of the next frame you're about to write to the port. So the MSCs of your four frames are 65, 66, 67, and 68.

 Be careful: In the input case, the audio frame with the frontier MSC is in the AL buffer. In the output case, the audio frame with the frontier MSC is in your buffer.

 Continuing to follow the pattern, we get a UST/MSC pair and use it to compute the UST of each frame we are about to write:

[diagram: alGetFrameTime() returning a recent (MSC, UST) pair; extrapolation gives the USTs of MSCs 65-68]

The explanation here is identical to that for input.

 Now we know the UST of each audio frame, so we can choose some data to output and proceed with the alWriteFrames().
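 In code, the output-side computation is the same extrapolation; a sketch (port already primed with data, ust_per_msc computed as before):

#include <dmedia/audio.h>

/* UST at which the next frame written to port will hit the output jack */
stamp_t frontier_ust(ALport port, double ust_per_msc)
{
  stamp_t frontier, msc, ust;

  alGetFrameNumber(port, &frontier);     /* MSC of the next frame we'll write */
  alGetFrameTime(port, &msc, &ust);      /* a recent (MSC, UST) pair */
  return ust + (stamp_t)(ust_per_msc * (double)(frontier - msc));
}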

UST/MSC: Using the Frontier MSC to Detect Errors

So far in this document we have been carefully assuming that:

 - on input, you read the data out of every slot of the signal (you never drop any data), and
 - on output, you place data into every slot of the signal (your buffer never underflows).

These assumptions allow us to use the frontier MSC (returned by alGetFrameNumber() and vlGetFrontierMSC()) to associate an MSC with each piece of data we read or write, so we can get a UST for our data.

 You can also use the frontier MSC to detect overflow and underflow conditions, and determine their precise length.

 Here's an AL example of how to detect overflow in an input buffer:

{
  stamp_t newmsc, oldmsc = -1;

  /* this is your main data-reading loop, not a special one;
     port, buf, and nframes are set up elsewhere */
  while (1)
    {
      alReadFrames(port, buf, nframes);

      alGetFrameNumber(port, &newmsc);
      if (oldmsc >= 0)
        {
          stamp_t M = (newmsc - oldmsc) - nframes;
          if (M != 0)
            printf("we overflowed for %lld MSCs!\n", M);
        }
      oldmsc = newmsc;
    }
}
Each time you read nframes frames, check to see how much the frontier MSC has incremented. If the frontier MSC has incremented by nframes+M, then you know that an overflow just occurred, and that it lasted exactly M sample periods.

 Use this basically identical code to detect underflow in an AL output buffer:

{
  stamp_t newmsc, oldmsc = -1;

  /* this is your main data-writing loop, not a special one;
     port, buf, and nframes are set up elsewhere */
  while (1)
    {
      alWriteFrames(port, buf, nframes);

      alGetFrameNumber(port, &newmsc);
      if (oldmsc >= 0)
        {
          stamp_t M = (newmsc - oldmsc) - nframes;
          if (M != 0)
            printf("we underflowed for %lld MSCs!\n", M);
        }
      oldmsc = newmsc;
    }
}
Each time you write nframes frames, check to see how much the frontier MSC has incremented. If the frontier MSC has incremented by nframes+M, then you know that an underflow just occurred, and that it lasted exactly M sample periods.

 This technique works just as well with the VL.

 For more information on how and why this works, check out UST/MSC: Using the Frontier MSC to Detect Errors.

UST/MSC in the Graphics Libraries

UST/MSC fits quite nicely into the double-buffered model of OpenGL and IRIS GL, but currently there is no graphics UST/MSC API. For more discussion of this and some methods you can use to get USTs now, see UST and Graphics.
 
 

UST in the CL for cosmo1 and cosmo2

The cosmo1 and cosmo2 devices support direct paths from video to compression to memory, and memory to decompression to video. You use clGetNextImageInfo() to retrieve UST and field count information:
 
 
typedef struct {
    unsigned size;              /* size of compressed image in bytes */
    unsigned long long ustime;  /* UST at which the field hit the board's jack/bus */
    unsigned imagecount;        /* field counter */
    unsigned status;            /* additional status information */
} CLimageInfo;
The API is UST timestamping for input and something bizarre for output.
 
  The cosmo1 and cosmo2 boards violate the rule about UST being at the video jack. To do analog video I/O with cosmo1, you plug your cosmo1 into an ev1 board (see ev1 and cosmo1). To do digital video I/O with cosmo2, you wire your cosmo2 to an ev3 digital video jack with the VL (data flows over the GIO or XIO bus). In both cases, the UST returned by clGetNextImageInfo() specifies when the associated field hits the cosmo1/cosmo2 board's jack/bus, not the ev1/ev3 board's video jack. Your software must compensate for the delay through the ev1/ev3 board. For video to compression to memory, this delay is on the order of a few video lines. For memory to decompression to video, it is either a few video lines, or one frame time plus or minus a few lines. In both cases you can query the delay with vlGetPathDelay().
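 A minimal capture-side sketch; the compressor handle setup is omitted, and the zero return as the success code is an assumption (see clGetNextImageInfo(3)):

#include <dmedia/cl.h>
#include <stdio.h>

/* poll timing info for the next compressed field; handle comes from
   your clOpenCompressor()/video path setup */
void report_field(CLhandle handle)
{
  CLimageInfo info;

  if (clGetNextImageInfo(handle, &info, sizeof(info)) == 0) /* assumed success code */
    printf("field %u at UST %llu\n", info.imagecount, info.ustime);
}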

 The CL interface for cosmo1 and cosmo2 always deals in fields. The cosmo1 board is always F1 dominant (see Definitions: F1/F2, Interleave, Field Dominance, and More). You set cosmo2's dominance using VL_MGC_DOMINANCE_FIELD on the VL_CODEC node (the default is F1 dominance). Here is how to tell which of your fields is dominant and which is non-dominant:

[table omitted]

Clocking and Clock Drift: Relating Multiple MSC or UST Clocks

All the clocks in your system are driven by electrical oscillators. This includes the UST clock, the gettimeofday() clock, an audio port's clock (which drives its MSCs), and a video path's clock (which drives its MSCs). Sometimes the oscillator is outside your machine. Oscillators are imperfect. Two oscillators with the same nominal rate may run at slightly different actual rates depending on their manufacture, their age, or even the temperature.

 If your software makes an assumption about the rate ratio of any two clocks, then it must assure that either:

 - both clocks are driven by the same oscillator, so the assumed ratio holds exactly (case 1), or
 - the clocks are driven by different oscillators (case 2), and either the program runs briefly enough that the accumulated drift stays within tolerance (case 2a), or the program measures the actual rate ratio and compensates (case 2b).

Movie Playback Example

An example will make this clear. A movie file stores audio at some sampling rate, say 44100 frames per second, and video at some field rate, say 50 fields per second. The file contains exactly 44100 audio frames for every 50 video fields. Say you want to play such a file. Your program sets up an AL port at 44100, and a VL path at 50. Your program uses UST/MSC to prime the AL port and VL path so that the next audio and video data written will be simultaneous at the jack, and then it begins to write groups of 44100 audio frames and 50 fields from the movie file. The 44100 Hz heartbeat that tells your audio hardware when to output a new frame comes from an oscillator. The 50 Hz heartbeat that tells your video hardware when to output a new field comes from an oscillator.

 Say your audio system and your video system are clocked off of the same oscillator (case 1 above). This would be the case if your video system and your audio system are locked to a common external blackburst signal, for example. If that oscillator is running a bit slow or fast compared to some other oscillator (say the oscillator in your wristwatch), it's no big deal: your audio system will not quite be running at 44100 Hz, but your video system will have the exact same error, so the audio and video you provide to them will stay in sync.

 Now say your audio system and your video system are not clocked off of the same oscillator (case 2). This is a common situation on lower-end SGI systems, where users may not have the knowledge, desire, or equipment to slave their audio and video devices to a common oscillator. Then the ratio of the rates of your audio and video system may not actually be 44100/50. As your program runs, the sound coming out the audio jack will slowly drift ahead of the image coming out the video jack, or vice versa. This drift will continue to worsen without bound as playback proceeds.

 To take a worst-case example for typical hardware, say the audio oscillator is 50 parts per million fast (44102.205 Hz) and the video oscillator is 50 parts per million slow (49.9975 Hz). After playing for 10 minutes, the video will have drifted 60ms (3 fields) behind the audio, which is quite noticeable. What's worse, since your program continues to send 44100 audio frames for every 50 video fields, but since the devices consume 44100 audio frames for every 49.995 (roughly) video fields, an unbounded amount of video data slowly builds up between your program and the video device (in this case that data will build up in a VL buffer). Eventually you will run out of memory.

 Say your application requires field accuracy (audio and video in sync within ±10ms). If your application never deals with movie files longer than 1 minute, then your audio and video will never drift apart more than 10ms in either direction, and no more than one extra field will collect in the VL buffer, so you have no problem. This is case 2a.

 If your application can deal with movies of any length, then you have case 2b. In this case, you cannot achieve any upper bound on synchronization error or memory usage. You must measure the actual ratio of audio frames per video field, and adjust the data coming out of the movie file so that it actually has that ratio going into the devices. Depending on the quality constraints, this may require sampling rate converting the audio data, dropping or duplicating video fields, or other tricks. UST/MSC gives you a way to measure the ratio. To do this, grab two audio UST/MSC pairs spaced a second or more apart (call them (aust1, amsc1) and (aust2, amsc2)). Also grab two video UST/MSC pairs spaced a second or more apart (call them (vust1, vmsc1) and (vust2, vmsc2)). The ratio of audio frames to video fields is:

  (amsc2 - amsc1)   (vust2 - vust1)
  --------------- * ---------------
  (aust2 - aust1)   (vmsc2 - vmsc1)

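 In C, with the four pairs gathered as above (USTMSCpair from libdmedia), the measurement is:

#include <dmedia/dmedia.h>

/* measured audio frames per video field; the pairs in each stream
   should be spaced a second or more apart */
double audio_frames_per_video_field(USTMSCpair a1, USTMSCpair a2,
                                    USTMSCpair v1, USTMSCpair v2)
{
  double audio_rate = (double)(a2.msc - a1.msc) / (double)(a2.ust - a1.ust);
  double video_rate = (double)(v2.msc - v1.msc) / (double)(v2.ust - v1.ust);
  return audio_rate / video_rate;        /* nominally 44100/50 = 882 here */
}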
Other Examples

The clock drift problem described above happens in many other scenarios: the same analysis applies whenever two subsystems whose rates are nominally related (audio and audio, video and video, UST and either, or clocks on two different machines) are driven by different oscillators.

Where's the Oscillator?

This chart helps you locate the oscillator which drives the clocks used by your program. All of the crystal oscillators mentioned are different oscillators:
 
 
Clock                                       Where's the oscillator?

UST                                         crystal in your machine

gettimeofday()
  no time daemon                            crystal in your machine
  time daemon running                       in the timed master or NTP stratum-0 clock

video input                                 signal contains clock
                                            (oscillator is in upstream equipment)

video output
  internal sync                             crystal in your machine
  genlock/slave sync                        sync source signal contains clock
                                            (oscillator is in upstream equipment)

digital audio input (AES/ADAT)              signal contains clock
                                            (oscillator is in upstream equipment)

analog audio input, analog audio output,
digital audio output (AES/ADAT)
  sync source set to AES/ADAT input signal  sync source signal contains clock
                                            (oscillator is in upstream equipment)
  sync source set to internal video         video board provides clock internally
                                            (see video output above)
  sync source set to external blackburst    external blackburst signal contains clock
                                            (oscillator is in upstream equipment)
  sync source set to internal               crystal in your machine
If you have not used apanel, vcp, or some AL or VL code to explicitly specify a sync source for video output, analog audio input, analog audio output, or digital audio output, then your oscillator is most probably a crystal inside your machine.