At AWS re:Invent 2017, Accenture and AWS set out to showcase what two industry-leading brands and their emerging technologies could accomplish together. In less than two months, we created a gesture-based, motion-design game powered by AI and other AWS services.
AWS (Amazon Web Services) is the largest cloud computing provider, accounting for approximately 34% of the IaaS and PaaS market. A subsidiary of Amazon.com, AWS offers networking, cloud computing, database storage, analytics, management, application services, mobile, developer tools, and IoT services.
Accenture and AWS needed to demonstrate the power of AI, machine learning, IoT, and analytics for enhancing performance and improving resource and time management. Given the setting, they needed a concept that would attract players on a crowded convention floor and keep them engaged long enough to get the full effect of the demonstration.
We began at the whiteboard in our Austin headquarters, where we created a digital 1-on-1 survival challenge set in sub-zero temperatures. Our motion graphics team breathed life into our sketches and mapped out measurements for physical fabrication. Beautiful game visuals captured players' attention, while their gesture input fed machine learning on the back end.
The game was brought to life through animation, engaging players with a full story arc and arresting visuals. Because the visuals changed with each new player's input, the animation had to be responsive and flexible.
From 14-foot scaffolding to physical backdrops that mirrored the digital ones, the physical space brought the full experience to life. Our Design & Manufacturing team took the digital models of the game installations and constructed them in our Austin shop. The immersive installations were designed to create a holistic environment for the experience and crafted for installation at re:Invent in Las Vegas.
We set up AWS services to track user inputs and capture data sets for real-time analytics. The experience was built with Unity, Amazon Machine Learning, Amazon Rekognition, Amazon Kinesis, and AI to detect players' interactions as inputs for progressive learning. We designed the game not only to showcase the integration of these services but also to give players an engaging way to understand what they are capable of.
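To make the data flow concrete, here is a minimal sketch of how a player's gesture event might be packaged for an Amazon Kinesis stream. The field names, the stream name, and the helper function are illustrative assumptions, not details from the actual build.

```python
import json
import time

def make_gesture_record(player_id: str, gesture: str, confidence: float) -> dict:
    """Build the keyword arguments for a Kinesis put_record call.

    Hypothetical helper: the payload schema and stream name are
    assumptions for illustration, not the production schema.
    """
    payload = {
        "player_id": player_id,
        "gesture": gesture,        # e.g. "swipe_left", "grab"
        "confidence": confidence,  # detector confidence, 0.0-1.0
        "timestamp": time.time(),
    }
    return {
        "StreamName": "gesture-events",        # hypothetical stream name
        "Data": json.dumps(payload).encode(),  # Kinesis expects bytes
        "PartitionKey": player_id,             # keeps one player's events ordered
    }

# In production this record would be sent with boto3:
#   boto3.client("kinesis").put_record(**make_gesture_record(...))
record = make_gesture_record("player-42", "swipe_left", 0.93)
print(record["PartitionKey"])  # player-42
```

Partitioning by player ID keeps each player's events in order within a shard, which matters when downstream analytics reconstruct a play session.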
Leveraging AI, machine learning, and IoT alongside gesture-based gameplay highlighted the strengths of integration. Leap Motion technologies and AV effects created a responsive environment in which each player's gestures impacted the game's visuals. Every behavior and decision was processed, analyzed, and redistributed with artificial intelligence, informing better practices and strategies with each level.
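The gesture-to-visuals loop can be sketched in a few lines: a tracked hand position is normalized and mapped to a visual parameter. The value ranges and the mapping function below are assumptions for illustration, not the production mapping.

```python
def clamp(value: float, lo: float, hi: float) -> float:
    """Constrain a value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def palm_x_to_pan(palm_x_mm: float, max_pan_deg: float = 45.0) -> float:
    """Map a palm x position to a camera pan angle in degrees.

    Hypothetical mapping: assumes the sensor reports roughly
    -200..200 mm of horizontal travel over its field of view.
    """
    normalized = clamp(palm_x_mm / 200.0, -1.0, 1.0)
    return normalized * max_pan_deg

print(palm_x_to_pan(0.0))    # 0.0  (hand centered -> no pan)
print(palm_x_to_pan(100.0))  # 22.5
print(palm_x_to_pan(500.0))  # 45.0 (out-of-range input clamped to max pan)
```

Clamping the normalized position keeps an out-of-range hand reading from throwing the camera past its limits, which keeps the visuals stable for every player.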
Tasked with creating a fully immersive experience that showcased the potential of combining artificial intelligence, deep reinforcement learning, and cloud computing with data management and visualization, we transformed the client's ask into an answer they could touch, feel, and experience.
The full scope of this project drew on nearly every one of our internal capabilities, and the outcome wouldn't have been possible without a fully integrated team. Unencumbered by the complexities of multiple vendors or outsourcing, our five teams simultaneously designed, developed, and fabricated the project in under two months.