Seervision or Automatic Camera Preset Recall: What’s the difference?


With the recent introduction of Q-SYS VisionSuite, I wanted to address a common question I’ve been hearing from customers about the components of this Q-SYS technology offering: How do I know if Automatic Camera Preset Recall (ACPR) functionality or Seervision Presenter Tracking is right for my space?  

The short answer is: It depends on the desired outcome in your space. Let’s dive into what we recommend as the ideal use case for each solution.

What’s In a Name?

I should first acknowledge that we’ve evolved how we refer to these solutions. Q-SYS Intelligent Audio features leverage our Automatic Camera Preset Recall (ACPR) functionality, which we’ve previously referred to as Participant Camera Switching.

Intelligent Audio

This is the right solution to ensure the far end always sees the audience member or meeting attendee who is actively speaking. It does this by using location data from in-room microphones to trigger different user-defined camera presets, automatically framing the best shot.

What’s great about Q-SYS Intelligent Audio is that it runs on the Q-SYS Core, so you don’t need additional hardware beyond the microphones and Q-SYS cameras you already have in the space. Instead, it gets added to your system at the software level (the plugin is available in Q-SYS Designer Asset Manager). Just configure your camera presets and zones, and Q-SYS will handle all the switching. Of course, the more Q-SYS cameras in the space, the more flexibility you’ll have to ensure the right shot. For most typical meeting room scenarios, Intelligent Audio is all you’ll need!
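For the more technically curious, here’s a minimal conceptual sketch in Python of the zone-to-preset idea. This is not the actual ACPR plugin (which you configure in Q-SYS Designer and which runs on the Core); the zone names, camera names, preset numbers, and recall_preset helper below are all illustrative assumptions.

```python
# Hypothetical illustration of zone-based camera preset recall.
# In a real Q-SYS deployment this logic is handled by the ACPR plugin on the Core;
# the zone names, preset numbers, and recall_preset helper are invented for clarity.

ZONE_TO_PRESET = {
    "mic_zone_1": ("Camera A", 1),  # head of the table
    "mic_zone_2": ("Camera A", 2),  # left side of the table
    "mic_zone_3": ("Camera B", 1),  # right side of the table
}

def recall_preset(camera: str, preset: int) -> None:
    # Placeholder for the actual camera-control call.
    print(f"Recalling preset {preset} on {camera}")

def on_active_talker(zone_id: str) -> None:
    """Recall the camera preset mapped to the microphone zone that detected speech."""
    mapping = ZONE_TO_PRESET.get(zone_id)
    if mapping is None:
        return  # unmapped zone: keep the current shot
    camera, preset = mapping
    recall_preset(camera, preset)

# Example: the microphones report an active talker in zone 2.
on_active_talker("mic_zone_2")
```

In the real system, that mapping and the switching logic live entirely in the plugin configuration, so no custom code is required.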

Intelligent Presenter Tracking

On the other hand, if you have dedicated presentation areas or stages where fluid camera movement is crucial, then look no further than the adaptive, full-body presenter tracking technology enabled by the Q-SYS Seervision AI accelerator. It’s perfect for lecture halls or high-impact corporate spaces where a lecturer wants the freedom to walk about the space with natural camera direction.

It also uses “Trigger Zones” to fire different actions based on the presenter’s location. This is ideal when a presenter stands at a whiteboard: the system can cut to a cropped, static view so the far end sees the content without any camera movement, then switch seamlessly back to presenter tracking when the presenter steps away.
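To illustrate the trigger-zone concept, here is a hedged sketch rather than the actual Seervision API; the whiteboard coordinates and the two behavior names are assumptions made up for the example.

```python
# Hypothetical sketch of trigger-zone logic; zone bounds and behavior names are invented.

# Stage coordinates in meters; the whiteboard area is one user-defined trigger zone.
WHITEBOARD_ZONE = {"x_min": 0.0, "x_max": 2.0, "y_min": 4.0, "y_max": 6.0}

def in_zone(position: tuple[float, float], zone: dict) -> bool:
    x, y = position
    return zone["x_min"] <= x <= zone["x_max"] and zone["y_min"] <= y <= zone["y_max"]

def choose_behavior(presenter_position: tuple[float, float]) -> str:
    """Hold a static crop at the whiteboard; otherwise keep tracking the presenter."""
    if in_zone(presenter_position, WHITEBOARD_ZONE):
        return "static_whiteboard_crop"  # no camera movement while content is shown
    return "follow_presenter"            # resume full-body presenter tracking

print(choose_behavior((1.0, 5.0)))  # -> static_whiteboard_crop
print(choose_behavior((4.0, 2.0)))  # -> follow_presenter
```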

Best of Both Worlds

Of course, combining these solutions results in a true multimodal AI experience that empowers teams to feel united and stay engaged no matter where they are. In fact, most of the systems spec’d recently with Q-SYS VisionSuite invoke both technologies. The combined experience is driven by the Q-SYS control engine, which handles all the camera switching and in-room automation for seamless hybrid meetings and lectures.

Speaking of cameras, the Q-SYS NC Series PTZ cameras can be deployed for either Intelligent Audio or presenter tracking, ensuring simple and flexible system design.

Learn more about Q-SYS VisionSuite and Request a Demo!