Axxon One DetectorPack 3.14.1 — Partner Overview
The release of DetectorPack 3.14.1 introduces new functionality and refinements to the Axxon One video analytics suite. This version combines a new flagship detector with a broad set of enhancements across multiple analytic modules, alongside architectural changes that partners should be aware of for deployment and maintenance. Below is a detailed overview of what’s new, how it works, and what to expect going forward.

Flagship Feature: Area Occupancy Detector
The central innovation of DetectorPack 3.14.1 is the Area Occupancy Detector, built on a segmentation neural model — the first of its kind used in AxxonSoft analytics. Unlike object-specific detectors, this tool evaluates how much of a defined zone is occupied, delivering a percentage of occupancy and triggering an event when thresholds are met.
Use Cases
Typical scenarios include monitoring how full a warehouse floor, loading area, or storage zone is, and tracking the accumulation of garbage and trash piles in Safe City deployments. Rather than counting individual objects, the detector answers the question "how full is this area?", which makes it suited to any scenario where the overall fill level matters more than the identity of specific objects.
How it Works
The detector highlights segmented objects with green overlays for clarity. The user configures:
- Minimum/maximum object size (filters irrelevant items, e.g., cars, people).
- Occupancy threshold — event generated once threshold is exceeded.
- Statistics output — optional, provides occupancy percentage at set intervals (for example, every 5 or 10 minutes).
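The thresholding logic described above can be sketched as follows. All names and the configuration model are hypothetical, chosen for illustration only; the actual detector is configured in the Axxon One UI, not via code:

```python
# Hypothetical sketch of the Area Occupancy Detector's thresholding logic.
# Segment sizes and the config model are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class OccupancyConfig:
    min_object_size: int      # pixels; filters out small, irrelevant items
    max_object_size: int      # pixels; filters out oversized items (cars, people)
    threshold_percent: float  # event fires once occupancy exceeds this value

def occupancy_percent(segment_sizes, zone_area, cfg):
    """Sum the areas of segments that pass the size filter and
    express them as a percentage of the monitored zone."""
    kept = [s for s in segment_sizes
            if cfg.min_object_size <= s <= cfg.max_object_size]
    return 100.0 * sum(kept) / zone_area

def should_trigger(segment_sizes, zone_area, cfg):
    """True once the filtered occupancy exceeds the configured threshold."""
    return occupancy_percent(segment_sizes, zone_area, cfg) > cfg.threshold_percent
```

In this sketch, a 10,000-pixel zone containing filtered segments totaling 5,000 pixels reads as 50% occupancy; with a 40% threshold, an event would fire.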
Additional Notes
- Can handle cluttered objects if boundaries are visible.
- Works best on uniform backgrounds (e.g., warehouse floors). Complex or textured backgrounds (like colorful tiles) may interfere.
- Objects like cars, people, or temporary movements are filtered out with size thresholds and frame averaging.
- More resource-intensive than some detectors, but designed for infrequent runs (spikes rather than constant load).
- Future roadmap: apply classification neural filters to stabilize performance in scenes with dynamic backgrounds (e.g., moving trees near outdoor bins).
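The frame-averaging filter mentioned above can be illustrated with a short sketch (class and parameter names are hypothetical): occupancy is averaged over a sliding window of recent frames, so a brief movement such as a passing person cannot push the reading over the threshold on its own.

```python
# Hypothetical sketch of frame averaging for filtering temporary movements.
# A transient spike in one frame is diluted across the window.

from collections import deque

class AveragedOccupancy:
    def __init__(self, window: int):
        # deque with maxlen keeps only the last `window` readings
        self.readings = deque(maxlen=window)

    def update(self, occupancy_percent: float) -> float:
        """Add the latest per-frame reading and return the windowed average."""
        self.readings.append(occupancy_percent)
        return sum(self.readings) / len(self.readings)
```

For example, with a window of 4 frames, three empty frames followed by a transient 80% spike average out to 20%, well under a typical trigger threshold.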
Technical Notes
- Works best with objects that have clear, solid borders (e.g., boxes, bags, trash piles).
- Events can be triggered when occupancy exceeds a configured threshold.
- If false triggers occur, partners can request retraining for more accuracy.
Inside the Detector Pack: How It Operates and Performs
Enhancements to Existing Detectors
Human Pose Detector:
Three new standard YOLO models (Nano, Middle, Large) expand flexibility for different performance/accuracy needs.
Multi-GPU Mode (Intel only):
Allows simultaneous use of multiple Intel GPUs, balancing loads automatically across them. You can also manually assign specific detectors to individual GPUs.
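The assignment policy described above can be sketched as follows; the function and device names are hypothetical, since the real assignment happens inside Axxon One, but the policy is the same: a manual pin always wins, and unpinned detectors fall to the least-loaded GPU.

```python
# Hypothetical sketch of the multi-GPU assignment policy: pinned detectors
# go to their assigned Intel GPU; the rest are balanced automatically.

def assign_detectors(detectors, gpus, pinned=None):
    """Return a {detector: gpu} mapping; `pinned` overrides balancing."""
    pinned = pinned or {}
    load = {gpu: 0 for gpu in gpus}
    assignment = {}
    for det in detectors:
        # manual assignment takes priority; otherwise pick the least-loaded GPU
        gpu = pinned.get(det) or min(load, key=load.get)
        assignment[det] = gpu
        load[gpu] += 1
    return assignment
```

With two GPUs and three detectors, pinning one detector leaves the other two to balance automatically across both devices.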
Track Lifespan Parameter:
Added to several detectors, including Neuro Tracker, Object Tracker, and Human Pose Detector.
- Provides statistics on how long an object was present in view (defined area).
- Currently viewable via dashboards; future releases will display this in real-time on camera layouts.
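Conceptually, the Track Lifespan statistic is the time between the first and last observations of an object inside the defined area. A minimal sketch, assuming a simplified event feed of `(track_id, timestamp)` pairs (the actual event shape is internal to Axxon One):

```python
# Hypothetical sketch of the Track Lifespan statistic: lifespan is the
# elapsed time between an object's first and last sighting in the area.

def track_lifespans(events):
    """events: iterable of (track_id, timestamp_seconds) observations,
    in any order. Returns {track_id: lifespan_seconds}."""
    first_seen, last_seen = {}, {}
    for track_id, ts in events:
        first_seen[track_id] = min(first_seen.get(track_id, ts), ts)
        last_seen[track_id] = max(last_seen.get(track_id, ts), ts)
    return {tid: last_seen[tid] - first_seen[tid] for tid in first_seen}
```

An object seen at t=0s and last seen at t=9s yields a 9-second lifespan, which is the kind of figure the dashboards surface.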
GPU Optimizations (NVIDIA):
- Better resource usage for Neuro Tracker, Human Pose Detector, and Barcode Detector.
- Fixes CPU overuse when running analytics on GPUs (e.g., disabling underperforming color-detection sub-processes).
License Plate Recognition (LPR):
- Expanded support to new regions: Belize, New Zealand, Turkey, UAE, Tanzania.
- Added vehicle movement direction filtering.
Fire & Smoke Detector:
Improved operation in normal mode and scanning mode for better performance in different environments.
Face Detectors:
New configuration parameters:
- Face occlusion threshold (ignore partially visible faces).
- Open eye threshold (ignore incomplete or low-quality captures).
Crowd Estimation Detector:
Delayed start option prevents CPU overload when multiple detectors launch simultaneously.
Fight Detector:
Added detection threshold parameter for greater flexibility in tuning sensitivity.
Renaming & Structural Changes
Tampering Detector → Video Clarity Detector — better reflects its actual purpose.
Vehicle Recognition XR → License Plate Recognition XR — clarified to avoid confusion.
RR Packages:
- Now require installation of a separate SDK package.
- Important: simply updating the old analytics packages will not work — integrators must ensure SDK deployment is included.
Discontinued or Removed
- LPR BRS: Support discontinued due to lack of demand.
- Sub-detectors for VI detectors and Stopped Object Detector: Removed due to limited functionality and lack of real utility.
Hardware & GPU Support
Current Release (3.14.1):
Supports NVIDIA Turing and Ampere GPU generations.
Future Release (3.15):
- Will add support for NVIDIA Hopper and Blackwell (50xx series).
- Will drop support for older GPUs (e.g., GeForce 1050/1060 and below).
- Partners must verify client hardware before upgrading, as outdated GPUs will no longer run analytics or even decoding.
This transition is dictated by NVIDIA discontinuing backward support in their libraries, not AxxonSoft policy.
Roadmap & Limitations
Meta-Detector:
- Scene description data storage is already in place.
- Full text-based meta search support will arrive with the next Axxon One release.
- Currently, event details show text queries used and are accessible in tooltips; frame-area highlighting is still under investigation.
Zoom-In for Detection Areas:
- Feature already developed at UI level.
- Will be introduced in the next major release (mouse wheel zoom while defining zones).
Conclusion
DetectorPack 3.14.1 is a bridge release: it delivers a new Area Occupancy Detector, improves many existing analytics, and begins transitioning the ecosystem toward next-generation hardware support. While it removes outdated components, it also sets the stage for future capabilities like advanced meta search and more precise UI controls.
Partners should review hardware readiness, SDK deployment steps, and detector configuration options to ensure smooth adoption and to fully leverage the new functionality.
FAQ
Can the new Area Occupancy Detector replace abandoned object detection?
No. The Area Occupancy Detector measures the overall fill level of a defined zone, recalculating static masks every few seconds. Abandoned Object Detection, on the other hand, tracks individual objects over time to detect when something has been left behind.
These detectors serve different purposes and should be used separately depending on the scenario.
Does the Area Occupancy Detector work for cars in parking lots?
Technically yes, but it is not recommended. For parking applications, Neuro Tracker with car-specific models is more accurate.
Can I reduce CPU load if I’m running all analytics on NVIDIA GPUs?
Yes. Two adjustments help:
- Disable color detection (currently under redevelopment).
- Make sure that the “Hide static objects” parameter is disabled.
Both reduce CPU consumption and optimize GPU-based workflows.
Will existing LPR detectors automatically support the new countries?
Yes, support for the new regions (Belize, New Zealand, Turkey, UAE, and Tanzania) is included in DetectorPack 3.14.1. However, the new functionality becomes available only after updating to this release. No additional licensing or manual activation is required once the update is installed.
What happens if my clients update to 3.14.1 but still use older GPUs like GeForce 1050?
DetectorPack 3.14.1 still runs on these GPUs, but support for older models (such as GeForce 1050/1060 and earlier) will be dropped starting with version 3.15. On unsupported GPUs, analytics and even video decoding may fail or cause system instability. Before upgrading, partners should verify hardware compatibility to ensure stable operation.
Do partners need to install the new RR SDK package manually?
Yes. Without the separate SDK, RR detectors will fail after update. Partners should plan SDK deployment as part of their upgrade process.
Can the Area Occupancy Detector work with shelves in retail stores?
Not yet. The current release supports generic area monitoring only; a specialized shelf detector is on the roadmap as a separate feature.
What if the background is noisy (trees, moving leaves, colorful patterns) when applying the Area Occupancy Detector?
Detection may become unstable. Partners can use classification neural filters to exclude unwanted segments. This capability is under further improvement.
Can the Area Occupancy Detector identify under-occupancy (e.g., alert when area is empty)?
Not directly. The detector only triggers when occupancy exceeds a threshold; under-occupancy is available only via the statistics output.
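Since only the statistics output reports occupancy below the threshold, an "area empty" alert can be derived externally by watching that feed. A minimal sketch, assuming a simplified feed of `(timestamp, occupancy_percent)` samples at the configured interval (the real statistics API shape is an assumption here):

```python
# Hypothetical sketch: derive under-occupancy alerts from the detector's
# periodic statistics output, since the detector itself only triggers
# on over-occupancy.

def under_occupancy_alerts(stats, floor_percent):
    """stats: iterable of (timestamp, occupancy_percent) samples.
    Returns the timestamps at which occupancy fell to or below the floor."""
    return [ts for ts, pct in stats if pct <= floor_percent]
```

For example, with samples every 5 minutes, a reading of 0% at t=600s against a 5% floor would raise one alert.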
What is the load impact of the Area Occupancy Detector compared to Neuro Tracker?
Slightly higher per run. However, since the detector can work in periodic bursts (1 minute or more), the overall impact is manageable.
Can these detectors be used both in Axxon One and Axxon PSIM?
No. This release is for Axxon One only. The PSIM platform has a separate Detector Pack, incompatible with this one.
How can partners demo the Area Occupancy Detector to customers?
AxxonSoft will provide promo/demo videos (examples from Safe City garbage projects and warehouse monitoring) to use in partner presentations.