Next Generation VLBI station

Third draft

 

Jouko Ritakari, Jouko.Ritakari@hut.fi

Metsähovi Radio Observatory

January 25, 2001

 

The purpose of this document

 

In this document I will explore the impact of new data communications and recording technologies on VLBI data acquisition hardware.

At this moment we are at a cross-over point where the capabilities of commercial equipment are roughly the same as those of custom-made VLBI equipment. Two years ago the commercial equipment had no chance; two years from now it will be the clear winner.

The author acknowledges financial support for writing this document by European Commission ICN RadioNET contract HPRI-CT-1999-40003, "Infrastructure Cooperation Network in Radio Astronomy".

 

Background

 

If we reverse-engineer the existing VLBI station hardware, we see that most of the complexity results from the following constraints:

 

 

 

The world around us has changed:

 

 

Next-generation VLBI station

 

Clearly the VLBI systems must be able to use IP-based data networks.

 

We have two alternatives here: either modify the old equipment or build an entirely new instrument.

 

In the following I will outline the possible solutions if we decide to build the terminal from scratch. Of course, the optimal VLBI terminal would be simple and powerful, easy to build, and would require no tuning. That means that the new terminal would use digital logic wherever possible.

Direct IF sampling

 

Digital IF sampling would simplify the design greatly: the number of phase-locked oscillators would be reduced to one. To get the same performance we have now, we need at least a one-gigasample-per-second, 2-bit sampler. Of course, a five-gigasample, 4-to-8-bit sampler would be nice if we intend to do signal processing.
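As a rough illustration (a software sketch, not part of any existing design), 2-bit VLBI quantization maps each sample to one of four levels, with the threshold typically set near one sigma of the input signal:

```python
# Sketch of 2-bit (four-level) quantization as used in VLBI samplers.
# The threshold v0 is an assumed parameter, typically ~1 sigma of the input.

def quantize_2bit(samples, v0=1.0):
    """Map each analog sample value to a 2-bit code 0..3."""
    out = []
    for x in samples:
        if x < -v0:
            out.append(0)   # strong negative
        elif x < 0:
            out.append(1)   # weak negative
        elif x < v0:
            out.append(2)   # weak positive
        else:
            out.append(3)   # strong positive
    return out

def pack_samples(codes):
    """Pack four 2-bit codes into one byte, first sample in the low bits."""
    packed = bytearray()
    for i in range(0, len(codes), 4):
        byte = 0
        for j, c in enumerate(codes[i:i + 4]):
            byte |= (c & 0x3) << (2 * j)
        packed.append(byte)
    return bytes(packed)
```

At one gigasample per second this packing produces a 2 Gbit/s (250 MB/s) raw data stream, which sets the scale for everything downstream.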

 

Fortunately, there is an industry out there that makes high-speed samplers. They call them digital oscilloscopes and bundle the sampler with a screen and lots of knobs.

 

Digital oscilloscopes are relatively inexpensive. A low-spec one costs 1100 euros (four channels, one gigasample per second each) and a relatively high-spec one costs 7000 euros (two channels, five gigasamples per second each).

 

Of course, these oscilloscopes would need some modifications, but these should not be very difficult. The Japanese have succeeded in doing this in their Giga-bit VLBI system.

 

Here is an excerpt from Yasuhiro Koyama's conference paper. The full paper is available at http://ivscc.gsfc.nasa.gov/publications/gm2000/koyama/

"The sampler unit has been developed based on a commercially available digital oscilloscope products (Tektronix TDS784/TDS580). The oscilloscope unit has a high speed analog-digital sampler chip which operates at the speed of 1024 Mbps (bit-per-second) with a quantization level of 4 bits for each sample. One of the 4 quantization bits is extracted from the digital oscilloscope and is connected to the sampler interface unit. The sampler interface unit demultiplexes the 1024 Mbps of serial data stream to 32 parallel lines."

 

These oscilloscopes are obsolete and no longer available, but new models should not be too different.

Digital BBCs: do we really need them?

 

Here we have two choices: wideband VLBI (correlate the whole IF band) or digital BBCs (select the narrow bands we want to observe).

 

Both approaches have their benefits; the choice is not simple. In both cases the processed signal goes to a hardware-based UDP/IP framer that sends the formatted packets to the correlator or recorder units over 100 Mbit/s or Gigabit Ethernet.

Wideband VLBI

 

The main advantage of this approach is simplicity. If bandwidth is cheap, why do any gymnastics to the signal? Just send it, correlate everything and select the results you need.

 

This is the approach the Japanese use in their Giga-bit VLBI system.

 

At http://veraserver.mtk.nao.ac.jp/ you can see pictures of the (presumably) same system. If you remove the tape recorders and keep the essential VLBI electronics, the system is quite simple. Note especially the nice prototype filter board.

 

If we use gigabit sampling, we must time-multiplex the signals to several different correlator engines. This approach is explained in the ALMA project documentation; see http://alma.nrao.edu/.

Use of digital BBC

 

The design of a digital BBC has been discussed in ALMA memo #305, "A digital BBC for the ALMA interferometer" (http://alma.nrao.edu/memos/html-memos/abstracts/abs305.html). At this moment it has been decided that the ALMA project will use direct IF sampling and digital filters, but not digital BBCs.

 

If we decide to select the narrow bands we want to observe, the digital BBC seems to have several advantages over the old analog BBC: digital BBCs are more reliable, easier to manufacture, and less expensive.

Data storage on tape or hard disks

 

The digital filter or the digital BBCs send the data in UDP packets over 100 Mbit/s or Gigabit Ethernet. In the final version the data goes via the Internet directly to the correlator engines.

 

We can use the system even if high-speed data links are not available. The data can be stored on COTS magnetic tape drives and the tapes sent to the correlator by mail. A better solution would be to store the data short-term on microcomputer hard disks and transfer it to the correlator via relatively slow (100 Mbit/s to 1 Gbit/s) Internet connections. This has been described in the article "Concept for Next Generation VLBI", available at http://kurp.hut.fi/vlbi/instr/nexgen.html.
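A quick back-of-the-envelope check shows why relatively slow links suffice once the data is buffered on disk first. The rates below are assumed example figures, not measured values from any particular station:

```python
# Back-of-the-envelope transfer-time estimate.  All rates are assumptions
# chosen for illustration, not figures from a real observation.

def transfer_hours(record_rate_mbps, obs_hours, link_rate_mbps):
    """Hours needed to ship obs_hours of data, recorded at
    record_rate_mbps, over a link running at link_rate_mbps."""
    data_megabits = record_rate_mbps * obs_hours * 3600
    return data_megabits / (link_rate_mbps * 3600)

# Example: a 24-hour session recorded at 256 Mbit/s, shipped over a
# 100 Mbit/s link, takes roughly 2.5 days to transfer.
hours = transfer_hours(256, 24, 100)
```

The point is that the transfer need not keep up with the recording in real time; it only has to finish before the disks are needed for the next session.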

 

Real time data streams vs. files

 

Traditionally VLBI stations record real-time data streams on magnetic tapes. The tapes are shipped to the correlator, which plays them back and synchronizes the data streams. However, in the Internet world almost nobody uses real-time data streams or stores them on hard disks or tapes.

 

In the next-generation VLBI system, a better approach would be to consider the correlator a special computer that batch-processes chunks of data. The data is stored in files that can be recorded on hard disks or magnetic tapes, or transferred via the Internet using standard Internet protocols.

 

Multiplexing and fan-out

 

Of course, the trivial solution is to use one recorder (or recorder track) for each baseband.

This solution has the drawback that the digital filters or BBCs must be designed to fit the capabilities of the recorder units, not vice versa. Another drawback is that many recorder units are needed, especially in geodetic observations.

 

Obviously we need some device to multiplex several slow-speed data streams to one recorder unit, or to fan out one high-speed data stream to several recorder units. Here we have two choices: either develop a radio-astronomy-specific solution or buy one from the nearest computer store.

 

If we decide to buy the unit, Gigabit Ethernet switches are practically the only game in town. They are inexpensive, widely used and compatible with the Internet.

Data communication protocols

 

In data communication protocols we have two choices: UDP/IP or TCP/IP.

 

UDP/IP

 

UDP is a datagram protocol. The UDP packets are sent to the network with the hope that they will reach the destination. If the packet is destroyed or discarded somewhere along the path, no further action is taken.

 

UDP is a very simple protocol. From the design point of view, its main strength is that it can be implemented in programmable logic: the digital BBC can send UDP packets without processor intervention. The same applies to the UDP/IP packet receivers in the recorder units and correlator engines.
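The UDP header itself is just four fixed 16-bit fields (RFC 768), which is why it maps so naturally onto programmable logic. A software sketch of building one (the checksum is left at zero, which is legal for UDP over IPv4):

```python
import struct

def udp_header(src_port, dst_port, payload_len):
    """Build the 8-byte UDP header: source port, destination port,
    length (header + payload) and checksum.  A checksum of 0 means
    "not computed", which is permitted for UDP over IPv4."""
    length = 8 + payload_len
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

def udp_datagram(src_port, dst_port, payload):
    """Prepend the UDP header to a payload."""
    return udp_header(src_port, dst_port, len(payload)) + payload
```

Four fixed fields and a byte count: this is exactly the kind of framing a state machine in an FPGA can emit at wire speed.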

 

UDP is especially well suited for connecting the digital BBCs to the recorder units, since local area networks are reliable and usually no packets are lost. In wide area networks some IP packets tend to be lost, and some mechanism to retransmit them may be needed.

 

UDP packets may be lost in transmission, or their order may change, especially in wide area networks. However, a simple sequence number or timecode should be enough to ensure that the data streams are not time-shifted or otherwise disorganized.
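As an illustration (the packet format here is hypothetical, not from any existing system), the receiver only needs to track a running sequence number to restore order and count lost packets:

```python
import struct

# Assumed application-level header: 32-bit sequence number, 64-bit timecode.
HEADER = struct.Struct("!IQ")

def make_packet(seq, timecode, payload):
    """Prefix the payload with a sequence number and timecode."""
    return HEADER.pack(seq, timecode) + payload

def check_stream(packets):
    """Sort received packets by sequence number and count the gaps.
    Returns (payloads_in_order, number_of_lost_packets)."""
    parsed = []
    for pkt in packets:
        seq, _timecode = HEADER.unpack(pkt[:HEADER.size])
        parsed.append((seq, pkt[HEADER.size:]))
    parsed.sort(key=lambda p: p[0])
    payloads, lost = [], 0
    expected = parsed[0][0] if parsed else 0
    for seq, payload in parsed:
        lost += seq - expected      # any gap means lost packets
        expected = seq + 1
        payloads.append(payload)
    return payloads, lost
```

A real recorder would do this in a ring buffer rather than sorting lists, but the principle is the same: ordering costs one integer comparison per packet.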

 

It is important to note that receiving a high-speed stream of UDP packets with a general-purpose computer is not a good idea: packet reception is extremely time-critical, since the input buffers are usually quite small.
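If a general-purpose computer must receive such a stream anyway, one standard mitigation is to enlarge the socket receive buffer so the kernel can absorb bursts while the application catches up. A sketch (the buffer size is an assumption; the kernel may cap the request):

```python
import socket

def open_receiver(port, bufsize=8 * 1024 * 1024):
    """Open a UDP receive socket with an enlarged kernel buffer.
    The kernel may grant less than requested (on Linux the ceiling is
    the net.core.rmem_max sysctl), so we read back the actual value."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    sock.bind(("", port))
    actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return sock, actual
```

Even with large buffers, a software receiver only postpones the problem; this is why a hardware packet receiver is attractive at gigabit rates.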

TCP/IP

 

TCP/IP protocols (for example FTP) are connection-oriented. If an IP packet is lost, the handshaking ensures that it is retransmitted. Because of this mechanism there is nothing time-critical in using, for example, FTP.

 

TCP/IP (here FTP) is especially useful when we transport data to the correlator via the Internet. The main disadvantage of TCP/IP is that it is too complicated to fit in programmable logic.

 

Practical considerations in designing a digital BBC

 

Almost all the technology is available off the shelf, either as modules or chips.

At this moment the samplers and signal processors are usable for VLBI purposes. If the trend continues, in two or three years we will be able to build four-gigasample-per-second systems fairly easily.

 

High-speed samplers

At this moment high-speed A/D converters are widely used in digital oscilloscopes and are available as bare chips. Unfortunately, stand-alone sampler units are not commercially available, possibly due to lack of a market.

Some examples of A/D converter chips are Maxim's MAX108 and MAX104. The MAX108 is an eight-bit ADC with a speed of 1.5 gigasamples per second and an on-chip 2.2 GHz track/hold amplifier. The less expensive MAX104 only goes to a 1 GHz sampling speed. Both chips have 8-to-16 demultiplexed PECL outputs. More data about these ADCs can be found at www.maxim-ic.com.

Designing a two-gigasample-per-second sampler is a non-trivial task. Very probably the best approach would be to utilize the sampler part of an existing oscilloscope, which is fairly easy. For example, Gage's CompuScope 82G is a two-gigasample-per-second, PCI-card-based oscilloscope that is built as a sandwich of two boards: one board contains the PCI interface, the other all the sampler-related electronics. It would be relatively easy to discard the PCI interface and keep the sampler. More information about this oscilloscope can be found at http://www.gage-applied.com/.

 

Data demultiplexing

Typically, a one-gigasample-per-second ADC outputs the data demultiplexed into two 500-megasample-per-second streams. It is possible to connect two (or more) ADCs in parallel to increase the bandwidth by doing gymnastics with the track/hold amplifier timing.

If we want to store the data in commonly available memory or do signal processing, we must further demultiplex the data onto a 64-bit or 128-bit wide bus.
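A software model of such a demultiplexer (the word width and sample size are example parameters): gather 32 consecutive 2-bit samples into one 64-bit word, oldest sample in the least significant bits.

```python
# Software model of the FPGA front-end demultiplexer: the hardware would
# do this with shift registers, one word per 32 sample clocks.

def demux_to_words(codes, samples_per_word=32, bits=2):
    """Pack a stream of small sample codes into wide words,
    oldest sample in the least significant bits."""
    words = []
    mask = (1 << bits) - 1
    for i in range(0, len(codes), samples_per_word):
        word = 0
        for j, c in enumerate(codes[i:i + samples_per_word]):
            word |= (c & mask) << (bits * j)
        words.append(word)
    return words
```

At one gigasample per second with 2-bit samples, the 64-bit bus then runs at a manageable 31.25 MHz word rate.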

Fortunately, the new FPGA chips (for example Xilinx Virtex-E or Virtex-II) can input differential signals at the required speed: the Virtex-E can handle signals up to 622 MHz and the Virtex-II up to 800 MHz. Of course, only a small part of the chip can run at this speed; the rest must operate on demultiplexed data.

Pre-correlator data processing

The signal (or in this case the sampled data) must be divided into sub-bands or time-demultiplexed before being fed to the correlator.

The new FPGAs seem to be well suited for digital signal processing. The Virtex-II is claimed to perform 600 billion multiply-and-accumulate operations per second, 256-tap FIR filtering at 180 MSPS, or a 1024-point FFT on 16-bit data in less than one microsecond.
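The MAC figure is easy to sanity-check: a 256-tap direct-form FIR needs 256 multiply-accumulates per output sample, so at 180 MSPS it consumes about 46 billion MACs per second, comfortably within the claimed 600 billion. A minimal reference FIR, for illustration only:

```python
# Direct-form FIR reference model: each output sample is the dot product
# of the taps with the most recent len(taps) inputs (zeros before start).

def fir_filter(taps, samples):
    """Apply an FIR filter; returns one output per input sample."""
    n = len(taps)
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j in range(n):
            if i - j >= 0:
                acc += taps[j] * samples[i - j]
        out.append(acc)
    return out

# Sanity check of the FPGA claim: 256 taps at 180 MSPS ~ 4.6e10 MACs/s.
macs_per_second = 256 * 180e6
```

The hardware version would of course pipeline the multiply-accumulate chain rather than loop, but the arithmetic workload is identical.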

Alternatively, DSPs might be used. The announced C64 DSP from Texas Instruments is expected to have clock rates up to 1.1 GHz and should be able to process a 125-megasample-per-second signal.

 

Implications

If efficient, reliable and low-cost VLBI hardware becomes available, it has the following implications:

Related work

 

Digital filters and digital BBCs are being developed (or are in use) in several projects: