
The era of Physical AI is upon us, and with it comes a significant transformation in cybersecurity risks. On May 7, 2026, VicOne, a cybersecurity company, unveiled its research on Physical AI attacks at the CYBERSEC 2026 conference. The company, in collaboration with DiCai Technology and Changlian Technology, is advocating that security validation for AI robots be completed before deployment. This initiative addresses critical aspects such as privacy protection, cybersecurity compliance, and AI model safety in fields like public patrol, healthcare, and long-term care services.
VicOne’s CEO, Cheng Yi-Li, emphasized that as AI begins to control robots, autonomous systems, and various physical devices, the challenges associated with these technologies are rapidly expanding. The company aims to leverage its extensive research and practical experience in automotive cybersecurity, extending these insights into Physical AI scenarios to help industries identify, verify, and mitigate cybersecurity risks before AI is integrated into real-world applications.
Furthermore, VicOne asserts that security measures should not only focus on post-deployment detection but should also be incorporated into the design, testing, and pre-deployment validation phases.
DiCai Technology, utilizing its self-developed “Privacy Security Enhanced AI,” has created autonomous patrol solutions specifically designed for sensitive environments such as hospitals, exhibition centers, campuses, and airports. To enhance the system’s reliability, DiCai has partnered with VicOne to move the cybersecurity validation process to the pre-deployment phase, ensuring that the autonomous patrol systems pass rigorous cybersecurity testing and behavioral stability verification before entering real-world environments.
According to DiCai’s General Manager, Zou Yao-Dong, “Clients in public and sensitive environments must have their privacy, compliance, and behavioral stability validated simultaneously when implementing autonomous patrol systems. This partnership with VicOne is a tangible commitment to our customers, ensuring that every system deployed is verified, predictable, and trustworthy.”
Changlian Technology’s AI system, AiBao, uses generative and agentic AI along with large language models (LLMs) for healthcare applications. In addition to providing in-hospital navigation services, it assists with patient education and consultation services. Changlian’s Business Operations Director, Cai Shu-Xian, remarked, “The healthcare and long-term care sectors require highly stable, reliable, and compliant AI services. Through our collaboration with VicOne, we aim to strengthen AiBao’s readiness in AI model safety and cybersecurity compliance, allowing healthcare AI robots to be deployed and used more securely in actual settings.”
As AI technology transitions from screen to reality, its applications in robotics, autonomous patrol systems, and healthcare services are proliferating, bringing about a fundamental change in cybersecurity risks.
Original article by NenPower. If reposted, please credit the source: https://nenpower.com/blog/vicone-advocates-for-pre-deployment-security-verification-of-ai-robots-amid-evolving-cybersecurity-risks/
