
Microsoft shows how AI can make a construction site safer at Build 2017

Artificial intelligence is set to make a huge impact on many aspects of everyday life, and Microsoft wants to be at the forefront of this tech as it revolutionizes the workplace. At the Build conference in Seattle, Washington on May 10, the company showcased how AI might make a construction site safer and more productive.

The average construction site is already packed with cameras, and Microsoft is leveraging that fact via its visual recognition software. By associating camera feeds with information about objects and people, the company will offer a platform that allows businesses to monitor work as it happens, and enforce policies automatically.

An on-stage demo saw Microsoft’s director of commercial communications, Andrea Carl, walk through an implementation of the technology. The set-up combined Azure Stack, Azure Functions, Cognitive Services, and commodity cameras, running more than 27 million recognitions every second.
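
Microsoft didn’t share the code behind the demo, but the setup it described implies a loop that pulls frames from each camera, runs them through a recognition service, and logs the resulting detections for later queries. Below is a minimal Python sketch under those assumptions; the Detection record and the detect_objects stub are placeholders for illustration, not part of any Microsoft API.

```python
import time
from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass
class Detection:
    """One recognized object or person in a single camera frame."""
    camera_id: str
    label: str          # e.g. "jackhammer", "person"
    confidence: float
    timestamp: float

def detect_objects(frame) -> List[Tuple[str, float]]:
    """Placeholder for a call to an image-recognition service
    (in the demo, Cognitive Services running against Azure Stack)."""
    raise NotImplementedError

def process_camera(camera_id: str, frames: Iterable, log: List[Detection]) -> None:
    """Run recognition on each incoming frame and record everything seen."""
    for frame in frames:
        now = time.time()
        for label, confidence in detect_objects(frame):
            log.append(Detection(camera_id, label, confidence, now))
```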

Carl located a jackhammer on a simulated construction site using a simple written command submitted from a smartphone: “Where is a jackhammer?” The AI’s object recognition capabilities allowed it to respond instantly with a message indicating that a jackhammer was available on the site.
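
Microsoft didn’t explain how the query itself is parsed, but answering “Where is a jackhammer?” reduces to looking up the most recent sighting of that object class in the detection log. A hedged sketch, with the log format and camera names invented purely for illustration:

```python
# Hypothetical detection log: (timestamp, camera_id, object label).
DETECTIONS = [
    (1494432000.0, "cam-1", "jackhammer"),
    (1494432030.0, "cam-4", "person"),
]

def answer_query(label: str) -> str:
    """Answer a 'Where is a <label>' query from the most recent sighting."""
    sightings = [(ts, cam) for ts, cam, seen in DETECTIONS if seen == label]
    if not sightings:
        return f"No {label} has been seen on site."
    ts, cam = max(sightings)  # latest timestamp wins
    return f"A {label} is available on site; it was last seen by {cam}."

print(answer_query("jackhammer"))
```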

The platform can also track which employees are certified to use a given piece of equipment, and who handled it most recently, by scanning faces as different people pick up the item. If an employee without the proper authorization picks up a piece of equipment, a violation notification is sent to the appropriate personnel.
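
The certification check amounts to a rules layer on top of face recognition: match the person handling a tool against an authorization table and raise a violation when there is no entry. A minimal sketch, with the table, employee IDs, and notification channel all assumed for illustration rather than taken from the demo:

```python
# Hypothetical table: employee ID -> equipment they are certified to use.
AUTHORIZATIONS = {
    "emp-001": {"jackhammer", "forklift"},
    "emp-002": {"forklift"},
}

def notify(message: str) -> None:
    """Placeholder for however violation notifications are delivered."""
    print(message)

def on_equipment_pickup(employee_id: str, equipment: str) -> None:
    """Called once face recognition identifies who just picked up a tool."""
    if equipment not in AUTHORIZATIONS.get(employee_id, set()):
        notify(f"Violation: {employee_id} handled a {equipment} without certification.")

on_equipment_pickup("emp-002", "jackhammer")  # triggers a notification
```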

Adding new employees to the system is a snap, since the platform is constantly taking photographs of people detected on-site. An administrator simply identifies which person is being brought on board from the images that have been collected, then adds the necessary details, such as their name and their authorizations. And because the cameras constantly monitor who is on site, the system can send a notification when someone is present who shouldn’t be.

The system can even make sure that items on the site are being stored safely, by referring to tagged storage locations set up for individual tools. We saw the system spot, via a camera feed, that a jackhammer had been left leaning against a workbench rather than lying flat, and automatically instruct an employee to remedy the situation.
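
Enforcing proper storage then comes down to comparing a tool’s last detected position against its tagged storage location. The sketch below treats both as bounding boxes in a single camera’s view, which is an assumption about how the tagging works rather than a documented detail of the platform:

```python
# Hypothetical tagged storage zones: tool label -> (camera_id, (x1, y1, x2, y2)).
STORAGE_ZONES = {
    "jackhammer": ("cam-3", (100, 400, 300, 600)),
}

def inside(box, zone) -> bool:
    """True if a detected bounding box lies entirely within the storage zone."""
    (x1, y1, x2, y2), (zx1, zy1, zx2, zy2) = box, zone
    return zx1 <= x1 and zy1 <= y1 and x2 <= zx2 and y2 <= zy2

def check_storage(label: str, camera_id: str, box) -> None:
    """Flag a tool that is outside, or improperly placed within, its tagged spot."""
    zone = STORAGE_ZONES.get(label)
    if zone is None:
        return
    zone_cam, zone_box = zone
    if camera_id != zone_cam or not inside(box, zone_box):
        print(f"{label} is not stored in its tagged location; asking a worker to fix it.")

check_storage("jackhammer", "cam-3", (120, 300, 280, 580))  # leaning outside its zone
```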

Microsoft’s other examples of how this technology could be applied include a system for detecting chemical spills, and a method of detecting when hospital patients are out of their beds. Putting this kind of AI into practice requires a lot of infrastructure in terms of cameras, but it clearly offers some major benefits when the necessary hardware is in place.

Brad Jones
Former Digital Trends Contributor