Intelligent Autonomous Drones With Cognitive De...
WEST PALM BEACH, Fla., May 17, 2022--(BUSINESS WIRE)--Levatas, developers of AI software that enables robots, drones, remote sensors, and fixed cameras to execute operational tasks at industrial sites, announced that it has raised $5.5 million in a seed round led by Castellan Group. Levatas was founded by CEO Chris Nielsen, along with partners Ryan Gay and Daniel Bruce, who serve as Chief Financial Officer and Chief Product Officer, respectively.
Based in West Palm Beach, FL, Levatas is the leading developer of cognitive intelligence for automating industrial inspections. Levatas creates and delivers end-to-end solutions that enable robots, drones, remote sensors, and camera systems to autonomously perform equipment monitoring, safety checks, and site surveillance tasks in industrial environments. Learn more at www.levatas.com.
The mission of the Center for Brain-inspired Computing (C-BRIC) is to deliver key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems such as self-flying drones and interactive personal robots.
By bringing together a unique team of leading researchers from the fields of machine learning, computational neuroscience, theoretical computer science, neuromorphic hardware, distributed computing, robotics and autonomous systems, C-BRIC will pursue quantum improvements in cognitive systems that will be difficult for these communities to achieve independently.
The field of robotics has seen many exciting achievements recently, and this innovation will continue to change how humans interact with the world around them. Additive manufacturing, also known as 3-D printing, will radically reshape manufacturing and the commercial insurance products of the future. By 2025, 3-D-printed buildings will be common, and carriers will need to assess how this development changes risk assessments. In addition, programmable, autonomous drones; autonomous farming equipment; and enhanced surgical robots will all be commercially viable in the next decade. By 2030, a much larger proportion of standard vehicles will have autonomous features, such as self-driving capabilities. Carriers will need to understand how the increasing presence of robotics in everyday life and across industries will shift risk pools, change customer expectations, and enable new products and channels.
IoT sensors and an array of data-capture technologies, such as drones, largely replace traditional, manual methods of first notice of loss. Claims triage and repair services are often triggered automatically upon loss. In the case of an auto accident, for example, a policyholder takes streaming video of the damage, which is translated into loss descriptions and estimate amounts. Vehicles with autonomous features that sustain minor damage direct themselves to repair shops for service while another car with autonomous features is dispatched in the interim. In the home, IoT devices will increasingly be used to monitor water levels, temperature, and other key risk factors, proactively alerting both tenants and insurers before small issues become losses.
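The home-monitoring pattern above can be sketched as a simple threshold check over sensor readings. This is an illustrative example only; the `Reading` class, sensor names, and thresholds are all hypothetical, not any insurer's or vendor's actual API.

```python
# Minimal sketch of proactive IoT risk monitoring: compare each sensor
# reading against a per-sensor threshold and collect alert messages.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str   # e.g. "water_level_cm", "temperature_c" (illustrative names)
    value: float

# Hypothetical alert thresholds per sensor type.
THRESHOLDS = {
    "water_level_cm": 5.0,    # alert if standing water exceeds 5 cm
    "temperature_c": 60.0,    # alert if temperature exceeds 60 degrees C
}

def check_readings(readings):
    """Return alert strings for any reading that crosses its threshold,
    so both the tenant and the insurer can be notified early."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.sensor)
        if limit is not None and r.value > limit:
            alerts.append(f"ALERT: {r.sensor} at {r.value} exceeds {limit}")
    return alerts
```

In a real deployment, the alert list would feed a notification service rather than return to the caller, but the core logic of continuous comparison against risk thresholds is the same.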
This article explores the benefits of using cognitive science as part of an AI education in Western military organizations. Tasked with educating and training personnel on AI, military organizations should convey not only that anthropomorphic bias exists, but also that it can be overcome to allow better understanding and development of AI-enabled systems. This improved understanding would aid both the perceived trustworthiness of AI systems by human operators and the research and development of artificially intelligent military technology.
But the act of human recognition involves distinct cognitive steps occurring in coordination with one another, including visual processing and memory. A person can even choose to reason about the contents of an image in a way that has no direct relationship to the image itself yet makes sense for the purpose of target recognition. The result is a reliable judgment of what is seen even in novel scenarios.
For military personnel who are in training for the operation or development of AI-enabled military technology, recognizing this anthropomorphic bias and overcoming it is critical. This is best done through an engagement with cognitive science.
Even granting that existing AI approaches are not intended to be like human cognition, both anthropomorphizing and the misunderstandings about human intelligence it carries are prevalent enough across diverse audiences to merit explicit attention for an AI military education. Certain lessons from cognitive science are poised to be the tools with which this is done.
Vincent J. Carchidi holds a Master's degree in Political Science from Villanova University, specializing in the intersection of technology and international affairs, with an interdisciplinary background in cognitive science. Some of his work has been published in AI & Society and the Human Rights Review.
These examples from a variety of sectors demonstrate how AI is transforming many areas of human life. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decision-making within organizations and improving efficiency and response times.
If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.
In cognitive warfare, information is king. The PLA should expand databases and create libraries for cognitive warfare operations and tactics, update them accordingly, and expand its cognitive warfare talent pool. It should also vigorously develop core technologies such as neural network systems and AI applications to create an interconnected media environment that effectively coordinates messaging and accelerates the linking of information with cognitive domain operations.
The idea of an intelligent machine, or machine intelligence, able to perform any projected warfare task without human involvement or intervention -- using only the interaction of its embedded sensors, computer programming, and algorithms within the human environment and ecosystem -- is becoming a reality that can no longer be ignored.
When nations individually and collectively accelerate their efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. Accordingly, there is a need to visualize what an algorithmic war of tomorrow would look like, because building autonomous weapons systems is one thing, but using them in algorithmic warfare with other nations and against other humans is another.
Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on autonomous weapons systems with Markus Wagner, a published author and Associate Professor of Law at the University of Wollongong in Australia.
The Levatas software guides these autonomous systems as they learn the everyday, mundane operational tasks that keep manufacturers running. For example, they learn to read gauges and to inspect for and report abnormal temperature changes. Over time, through machine learning, these cognitive machines get better and better at their jobs.
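The gauge-reading task described above can be illustrated with a small sketch. This is not Levatas's actual software; it assumes a vision model has already estimated the gauge needle's angle, and the function names, angle range, and tolerance are all hypothetical.

```python
# Illustrative sketch of automated gauge inspection: map an estimated
# needle angle onto the gauge's value range, then flag abnormal changes
# relative to the previous inspection.

def angle_to_reading(angle_deg, min_angle=-45.0, max_angle=225.0,
                     min_value=0.0, max_value=100.0):
    """Linearly map a needle angle (degrees) onto the gauge's value range.
    The angle span and value range here are illustrative defaults."""
    frac = (angle_deg - min_angle) / (max_angle - min_angle)
    return min_value + frac * (max_value - min_value)

def is_abnormal_change(previous, current, tolerance=10.0):
    """Flag a reading whose change since the last inspection exceeds
    the allowed tolerance, triggering a report."""
    return abs(current - previous) > tolerance
```

A needle pointing straight up (90 degrees on this illustrative dial) maps to the midpoint of the value range; repeated inspections then compare successive readings, which is one simple way the "report abnormal temperature changes" behavior could be realized.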
The company competes with other makers of robotic cognitive intelligence systems for a piece of the growing industry. Levatas boasts annual revenues of around $21 million. Competitors include makers of robotics inspection systems for smaller businesses, including Stradigi AI, ReadSense, MotionCloud and Visenze.
Drones are becoming increasingly popular for a variety of uses, including photography, videography, and delivery. But what if there were a way to automate drones so that they could be used without a human operator?
Drones are unmanned aircraft that are flown using a remote control or by following pre-programmed flight paths. Some drones are equipped with cameras and can be used for photography or video. Drones can also be fitted with sensors and used for mapping or monitoring crops or wildlife. Increasingly, drones are being equipped with artificial intelligence (AI) technology, which enables them to fly autonomously.
There are several advantages to using drones equipped with AI. First, it eliminates the need for an on-site human operator, reducing both cost and risk. Second, it allows the drone to fly for longer periods and cover larger areas than would be possible with a human operator. Finally, it enables the drone to make decisions on its own, such as avoiding obstacles or identifying targets.
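The on-board decision-making mentioned above, such as obstacle avoidance, can be sketched as a toy rule. Real autonomous drones use far richer perception and planning; the function name, distance keys, and thresholds below are illustrative assumptions only.

```python
# A toy obstacle-avoidance decision rule: given estimated distances (meters)
# to the nearest obstacle ahead, left, and right, choose a simple maneuver.

def avoidance_command(distances_m):
    """Pick a maneuver from clearance estimates: keep flying if the path
    ahead is clear, steer toward the more open side, or climb if boxed in."""
    if distances_m["ahead"] > 10.0:
        return "continue"            # path ahead is clear
    if distances_m["left"] > distances_m["right"]:
        return "yaw_left"            # more room on the left
    if distances_m["right"] > 10.0:
        return "yaw_right"           # more room on the right
    return "hover_and_climb"         # boxed in: gain altitude instead
```

Production systems replace rules like these with learned policies or path planners over full 3-D maps, but the structure (sense clearances, then select a maneuver) is the same decision loop the drone runs without a human operator.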
AI-equipped drones are being used in a variety of applications, including search and rescue, agricultural mapping, and even delivery of goods. As the technology continues to develop, we will likely see an increasing number of drones flying autonomously in our skies.