Participants include: Amphenol, Applied Micro, Cisco, Finisar, Fujitsu Optical Components, Inphi, Molex, MoSys, Semtech, TE Connectivity, and Xilinx.
Technologies demonstrated during the testing include host ICs with VSR SerDes capability, host PCB traces, optical module connectors, high-speed electrical I/O and backplane connectors, module retimers, heat sinks, and optical transceivers, all operating with 28G electrical interfaces. Agilent Technologies Inc. and Tektronix, Inc. will supply the test equipment used in the demonstration.
As designers develop 400Gbps line cards and consider 1Tbps designs, serial interfaces will play a dominant role. At 400G, traditional packet-processing models break down. High-performance serial networking memories, gearboxes, and n×25G retimers will emerge to address 1Tbps ecosystem requirements. Pin I/O efficiency, reach, and functionality are critical to delivering performance while managing interconnect, package, and power limitations. This presentation explores line-card architecture from the perspective of serial-interface devices (datapath and look-aside) and the ramifications as designs evolve toward 1Tbps.
Memory access rate is a primary performance bottleneck in high-performance networking systems. The MoSys Bandwidth Engine® family of ICs provides a significant improvement in effective memory performance by using high-speed serial I/Os, many banks of memory, a low-latency, highly efficient protocol, and intelligence within the device. The first member of the family can perform 2 billion 72-bit reads per second or 1 billion read-modify-write operations per second.
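A quick back-of-the-envelope check shows what those access rates mean in raw throughput. This is a sketch using only the figures quoted above (72-bit words, 2 billion reads or 1 billion read-modify-writes per second); the helper name is illustrative, not part of any MoSys API.

```python
# Sanity-check the Bandwidth Engine figures quoted above.
# 72-bit words and the stated access rates come from the text;
# everything else here is illustrative.

READ_RATE = 2_000_000_000   # 72-bit reads per second
RMW_RATE = 1_000_000_000    # read-modify-write operations per second
WORD_BITS = 72

def gbps(ops_per_sec: int, bits_per_op: int) -> float:
    """Raw data throughput in gigabits per second."""
    return ops_per_sec * bits_per_op / 1e9

print(gbps(READ_RATE, WORD_BITS))      # 144.0 Gbps of read data
# A read-modify-write moves one word in each direction:
print(gbps(RMW_RATE, 2 * WORD_BITS))   # 144.0 Gbps combined
```

Either mode works out to the same aggregate data movement, which is consistent with the device being limited by its serial I/O bandwidth rather than by any one access type.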
The full paper is available to IEEE Xplore subscribers or for purchase via IEEE Micro.
"At last month’s OFC, MoSys announced a 100Gbps gearbox PHY for data centers and networking applications. The MSH310 gearbox multiplexes and maps 10 lanes running at 10Gbps to 4 lanes running at 25Gbps. Per the standard requirements, it de-skews signals across the 10 lanes, adjusting for different propagation delays to enable 100G Ethernet and to simplify board layout. The chip also performs demultiplexing functions from 4x25Gbps to 10x10Gbps."
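The 10:4 rate conversion described above conserves aggregate bandwidth: 10 lanes × 10Gbps in equals 4 lanes × 25Gbps out. The toy model below illustrates that redistribution with a simple round-robin; it is not the actual IEEE 802.3 PCS-lane bit-multiplexing scheme, and the function name is mine.

```python
# Toy model of a 10:4 gearbox: symbols from 10 slow lanes are
# redistributed onto 4 fast lanes, conserving total data.
# Round-robin interleave is illustrative only, not the 802.3 scheme.

def gearbox(lanes_in: list[list[int]], n_out: int) -> list[list[int]]:
    """Interleave the input lanes time-slice by time-slice,
    then deal the combined stream onto n_out output lanes."""
    stream = [sym for group in zip(*lanes_in) for sym in group]
    return [stream[i::n_out] for i in range(n_out)]

# 10 lanes, each carrying 10 symbols (100 symbols total)
ten_lanes = [[lane * 10 + t for t in range(10)] for lane in range(10)]
four_lanes = gearbox(ten_lanes, 4)

assert len(four_lanes) == 4
assert sum(len(l) for l in four_lanes) == 100  # nothing lost or added
```

Each output lane ends up carrying 2.5× the symbols of an input lane per unit time, which is exactly the 10Gbps-to-25Gbps rate step; the de-skew the MSH310 performs is what makes this time-slice alignment valid across physically different trace lengths.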
"Mushrooming Internet traffic is giving rise to greater density line cards where the bandwidth between an NPU/ASIC or FPGA and memory is becoming a bottleneck. Directly addressing this issue, the 15Gbps GCI from MoSys is the fastest interface available for networking designs. In a 100GbE packet time, the BE-2 can perform eight operations, which could be used to read memory, write memory, update statistics, perform metering, and read tables. This new device is an attractive solution for high-capacity line cards alongside a Xilinx or Altera FPGA, which supports GCI."
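The "eight operations in a 100GbE packet time" claim can be put in concrete terms. A sketch, assuming standard Ethernet overheads (64-byte minimum frame plus 8 bytes of preamble/SFD and a 12-byte inter-frame gap); the variable names are mine:

```python
# What is a minimum-size packet time on 100GbE, and what operation
# rate does "eight operations per packet time" imply?
# Standard Ethernet framing overheads assumed.

LINE_RATE = 100e9          # bits per second
MIN_FRAME = 64             # bytes, minimum Ethernet frame
PREAMBLE, IFG = 8, 12      # bytes of preamble/SFD and inter-frame gap

packet_time = (MIN_FRAME + PREAMBLE + IFG) * 8 / LINE_RATE
print(f"{packet_time * 1e9:.2f} ns")   # 6.72 ns per minimum-size packet

ops_per_packet = 8
print(f"{ops_per_packet / packet_time / 1e9:.2f} Gops/s")
```

Sustaining eight memory operations every 6.72ns works out to roughly 1.2 billion operations per second even under worst-case minimum-packet load, which is the scenario that makes conventional DRAM access a bottleneck.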
November 12, 2012 - Linley Group Microprocessor Report
Networking Spurs Memory Evolution
"In 1975, people were queuing up to buy Pong, four-function calculators cost $40, and RAM was faster than microprocessors. Technology has changed for the better since then—except for the relative performance of memory and processors. Memory is now s-l-o-w, and this is a problem. Computers can hide behind caches, but high-performance real-time systems can’t. At the recent Linley Tech Processor Conference, GSI, Micron, and MoSys presented various solutions for improving memory technology. All three have memories for designers of high-speed networking systems who are willing to pay more per bit to achieve greater performance and advanced functions while avoiding the hidden cost of commodity DRAM."