On the surface, the Equal-i technology can seem confusing, since it's a new way to solve an old problem. Let this article serve to clear up any confusion about the exciting products from Array Telepresence, where we create a better on-screen experience in your conference rooms.
DX Camera and 2S Image Processor
The heart of Equal-i's technology is the ability, in hardware, to equalize the meeting participants so that everyone on screen appears the same size. This is done via Array's 2S Image Processor. Our dual-head DX camera, where each camera head captures one half of the room, connects to the image processor, which in turn connects via HDMI cables to your existing codec's video and content channels. The DX camera and 2S processor work as a team, and these products are sold as a set. The processor will work with any codec that can accept a 1080p 25/30 FPS signal on both the video and content channels (more on that below).
The 2S processor has three operating modes: Immersive Everywhere, Video+Content, and Immersive PTZ.
The Immersive Everywhere mode creates a dual-screen, immersive, telepresence-like experience. This is accomplished using your existing codec's video and content channels: one output of our Image Processor plugs into the video channel, and the other plugs into the content channel. At the start of the call, a content share is initiated, but instead of data, one half of the room is transmitted to the remote site via the content channel (provided the remote site has dual displays). This is why a 25/30 FPS (minimum) content channel is necessary: we're transmitting video over both the content channel and the video channel simultaneously.
Want to actually display content? No problem. Our Video+Content mode continues the content share, but our technology combines the dual camera images into a single 1080p stream, allowing content to be displayed on the second display. This operating mode is also ideal for multipoint and soft-codec calls. It can likewise be used if your codec cannot transmit 25/30 FPS out of its content channel, or if the far end's codec cannot receive it. The equalization is still there, and the experience is still much better than a traditional PTZ camera, but now you're using just the single video input of your codec.
If the remote participants don't have Equal-i, wouldn't it be great to be able to do something with their incoming image? With our Immersive PTZ mode, we can take any incoming "typical" PTZ view, split it, apply our image-improvement equalization to it, and send each half out to its own display. This creates an immersive-type feel from an endpoint where all they have is a typical PTZ camera. How does this work? From your codec, the left and right display outputs go back into our Image Processor, and then out to the displays from there. If the remote side shares content, no problem: you just switch back to their normal PTZ view, and the content will show up on the secondary display.
I hope this quick overview answers any questions about the operating modes and our technology, though it may well raise new ones. If it does, feel free to reach out to me with any questions you have.
Director of Engineering