Rajandeep Singh
Check out what I work on!
I helped create the head-worn display (HWD) exhibition as part of the Contextual Computing Group led by Prof. Thad Starner. The exhibition's main goal was to teach attendees about some of the biggest challenges of making HWDs for everyday wear. Attendees experienced live demos of the most popular HWDs of recent years and learned what the headsets did right and where challenges still remain. Attendees also received guided tours of one of the world's largest personal collections of HWDs and learned how the field has transformed over the last 30 years. My team and I created interactive demos on various headsets, such as the Magic Leap, HoloLens 2, Epson BT-30, and Google Glass EE2, to showcase the challenges mentioned above. The exhibition was presented at ISWC 2022 and the GVU Expo 2022, among other events. My team and I continue to update the exhibition and generally present it once every few months. To find the date and time of the next showcase, feel free to contact me. To learn more, click on the link below.
As part of my role as a student researcher in the Contextual Computing Group, I have been tasked with conducting studies that test applications of HWDs and uncover challenges in using them for everyday wear. These studies are intended to build towards publications. Two of the most recent studies have been on remembrance agents and safety. Remembrance agents are conversational aids that provide just-in-time information. Subtle head-up displays (SHUDs) can serve as these conversational aids, coaching wearers with conversation topics or just-in-time information cued by speech transcribed by modern recognition systems. SHUDs are also becoming more difficult to distinguish from normal eyeglasses. However, would the use of such smartglasses break the rapport between conversants? Would such aids actually improve conversations? Our studies on remembrance agents aim to address these questions. With safety, our research aims to find the safest position in the visual field for using HWDs while walking. Participants wore a Microsoft HoloLens 2, which we used to simulate different HWD positions in the visual field. Participants were tasked with reading text on the HWD while walking a path to the beat of a metronome, emulating everyday HWD use. We flashed the screen white and recorded each participant's reading speed, natural walking speed, and any missteps during the experiment.
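As a rough illustration of the safety analysis, here is a minimal Python sketch of how reading speed and walking-cadence deviation could be computed from logged timestamps; the log format and the metronome tempo used here are assumptions for illustration, not the actual study pipeline.

```python
# Minimal sketch of the kind of post-hoc analysis used in a reading-while-walking
# study. The log format (timestamps of words read and footfalls, in seconds) and
# the metronome tempo are assumptions, not the actual study pipeline.

def reading_speed_wpm(word_timestamps):
    """Words per minute from the timestamps at which successive words were read."""
    if len(word_timestamps) < 2:
        return 0.0
    duration = word_timestamps[-1] - word_timestamps[0]
    return (len(word_timestamps) - 1) / duration * 60.0

def cadence_deviation(step_timestamps, metronome_bpm):
    """Mean absolute deviation (seconds) of step intervals from the metronome beat."""
    beat = 60.0 / metronome_bpm
    intervals = [b - a for a, b in zip(step_timestamps, step_timestamps[1:])]
    return sum(abs(i - beat) for i in intervals) / len(intervals)

# Example: a 110 bpm metronome, one participant's logged word reads and footfalls
words = [0.0, 0.4, 0.9, 1.3, 1.9, 2.4]
steps = [0.0, 0.56, 1.10, 1.68, 2.21]
print(reading_speed_wpm(words), cadence_deviation(steps, 110))
```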
Augmented Reality for Visually Impaired People uses augmented reality devices (the Microsoft HoloLens) and a combination of spatial audio cues and speech sounds to deliver information about a user's surroundings. The project delivers spatial audio pings so users can understand where walls and obstacles are, and reads out any text in the user's environment. Our team conducted a user study with 7 blind users and found promising results. I worked on AR for VIPs during my semester abroad at UC Berkeley as part of the Extended Reality @ Berkeley group. My main responsibilities were object sonification, which involved automatically breaking down the mesh generated by the HoloLens to distinguish obstacles, and text recognition. The project was presented at the Microsoft Reactor in San Francisco, and I later presented it as a poster at India HCI 2019. A demo video and the link to the project site can be found below. You can also check out my poster for India HCI here: https://bit.ly/AR4VIPS
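For a sense of what object sonification involves, here is a simplified Python sketch (not the HoloLens implementation, which works on the device's spatial mesh): nearby mesh points are grouped into obstacles, and each obstacle is turned into a direction and distance for a spatial audio ping.

```python
# Illustrative sketch only: given obstacle points sampled from a spatial mesh,
# group nearby points into obstacles and compute the direction and distance
# used to place a spatial audio ping for each one.
import numpy as np

def cluster_points(points, radius=0.75):
    """Naive proximity clustering: add each point to the first cluster whose
    centroid is within `radius`, otherwise start a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

def ping_parameters(user_pos, obstacle_pos):
    """Azimuth (radians, 0 = straight ahead) and distance for a spatial ping."""
    d = obstacle_pos - user_pos
    distance = np.linalg.norm(d)
    azimuth = np.arctan2(d[0], d[2])  # x = right, z = forward
    return azimuth, distance

user = np.array([0.0, 0.0, 0.0])
mesh_points = np.array([[1.0, 0.0, 2.0], [1.2, 0.1, 2.1], [-2.0, 0.0, 1.0]])
for centroid in cluster_points(mesh_points):
    print(ping_parameters(user, centroid))
```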
ScholAR is an ongoing project at the IMXD Lab, IIT Bombay, focused on designing AR learning experiences for middle school students in rural India. ScholAR aims to encourage collaboration among students and improve their visualization and spatial conceptualization skills. The project consists of two applications: a teacher-side app and a student-side app. The teacher-side app helps the teacher monitor and interact with the entire class, while the student-side app hosts the educational modules. I worked on ScholAR for 6 months as a research intern at the IMXD Lab, mainly on networking and the development of the teacher-side application. You can learn more about ScholAR at the link below.
This project aimed to highlight performance differences between two very different GPGPU (General-Purpose computing on Graphics Processing Units) techniques: CUDA and Compute Shaders (Unity). These techniques are used widely in the video game industry for parallelizing computational tasks that can easily be divided into smaller tasks. We conducted simple N-particle simulations as our benchmark to illustrate the performance difference between the two approaches. This white paper was written for UC Berkeley's CS267 Applications of Parallel Computers. The complete white paper can be found at the link below.
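As an illustration of the workload being benchmarked (not the CUDA or compute-shader code from the white paper), here is a NumPy sketch of one all-pairs N-body update step; on the GPU, each particle's row of this computation is what a CUDA thread or compute-shader invocation would handle.

```python
# Rough NumPy sketch of the embarrassingly parallel workload behind the
# benchmark: an all-pairs N-body force and position update. Constants such as
# the time step and softening factor are arbitrary illustration values.
import numpy as np

def nbody_step(pos, vel, dt=0.01, softening=0.1):
    # Pairwise displacement vectors: diff[i, j] = pos[j] - pos[i]
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(-1) + softening ** 2
    inv_dist3 = dist2 ** -1.5
    np.fill_diagonal(inv_dist3, 0.0)                 # no self-interaction
    accel = (diff * inv_dist3[..., None]).sum(axis=1)
    vel = vel + accel * dt
    return pos + vel * dt, vel

rng = np.random.default_rng(0)
pos, vel = rng.standard_normal((256, 3)), np.zeros((256, 3))
for _ in range(10):                                  # a few simulation steps
    pos, vel = nbody_step(pos, vel)
```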
We implemented a communication system using a brain-computer interface (BCI), targeted at people suffering from partial paralysis or brain damage. It is intended to be an independent communication system that allows users to control a speller and a drawing scene using a BCI and gyroscopes. We then created a CNN to predict the user's drawings and used text-to-speech to describe what the user drew. We also experimented with using virtual reality to create a controlled environment for recording the EEG signals. We then analyzed the recorded signals to check whether the use of virtual reality results in less noisy data, i.e., less spontaneous EEG. This research project was my major/capstone project for my Bachelor's degree and culminated in a peer-reviewed publication in an international journal. You can read the complete paper at the link below.
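For illustration, a drawing classifier along these lines can be sketched in a few lines of PyTorch; the 28x28 input size, the 10 output classes, and the layer sizes below are assumptions, not the architecture from the published paper.

```python
# Minimal PyTorch sketch of a drawing classifier of the sort described above.
# Input size, class count, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class DrawingCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)               # (N, 32, 7, 7) for 28x28 inputs
        return self.classifier(x.flatten(1))

model = DrawingCNN()
logits = model(torch.randn(4, 1, 28, 28))  # batch of 4 rasterized drawings
predicted = logits.argmax(dim=1)           # class labels to feed text-to-speech
```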
Minute Magic is an AR prototyping tool for the Magic Leap that enables non-coders to easily create XR experiences. We realized that a 3D prototyping app would also allow experienced XR devs to quickly try out ideas and let users test them, since so much of XR development is about finding mechanics that feel great in 3D space. Minute Magic can be broken down into 3 sections. First, it allows users to create 3D objects using primitives. These primitives can have different colors, scales, positions, and rotations that the user can define. The second section of the app lets you add attributes to these objects, such as animations, convert them into spawners that eject other objects, and modify collision properties. One of the attributes we demoed allowed users to define keyframes. The third section of the app lets you play with what you created! In our demo, this involved destroying your creations as they move around you! I created Minute Magic as part of MIT Reality Hack 2022 with Anna Brewer (who also edited the video below). The project was used as a foundation by a CS 7470 Mobile and Ubiquitous Computing research team at Georgia Tech that I mentored.
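A hypothetical sketch of the kind of data model behind such a tool (names and fields here are illustrative, not the actual Unity/Magic Leap implementation): primitives carry a transform and appearance, plus a list of attributes that the play mode interprets.

```python
# Hypothetical data model for a prototyping tool like Minute Magic.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    kind: str                 # e.g. "animation", "spawner", "collision"
    params: dict = field(default_factory=dict)

@dataclass
class Primitive:
    shape: str                                # "cube", "sphere", ...
    color: tuple = (1.0, 1.0, 1.0)
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)
    scale: tuple = (1.0, 1.0, 1.0)
    attributes: list = field(default_factory=list)

# Section 1: create an object; Section 2: attach attributes; Section 3: play
# mode walks the scene and spawns/animates/destroys according to the attributes.
ball = Primitive("sphere", color=(1.0, 0.2, 0.2), scale=(0.3, 0.3, 0.3))
ball.attributes.append(Attribute("spawner", {"spawn_shape": "cube", "rate_hz": 2}))
ball.attributes.append(Attribute("animation", {"keyframes": [(0.0, (0, 0, 0)), (1.0, (0, 1, 0))]}))
```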
RICognEyes was a prototype wearable device that narrates the objects in a person's surroundings. The device is intended to help visually impaired people recognize their surroundings. It also narrates text and is operated using voice commands. The device was built using a Snapdragon DragonBoard and uses Oracle Cloud. This project was built in less than 2 days at LAHacks 2019 at UCLA. It won 5 awards, including Best Envision Hack, Best Social Impact Hack, Social Impact Prize, Best IoT Hack, and Best Hack using Oracle Cloud. You can find out more about the project on the Devpost link below.
Mindfit is a virtual reality environment in which the user can talk to a VR counselor. The environment changes based on the user's heartbeat (extracted from a Fitbit), in an attempt to implement biofeedback and allow the user to have better control over their vitals. The heart rate data is also displayed on a web dashboard for the user's human therapist/counselor to review. The project was built at SFHacks by a four-member team and won the Best Use of Google Cloud Platform award.
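The biofeedback mapping can be illustrated with a small Python sketch (not the actual Mindfit code; the baseline values and parameter names are assumptions): the latest heart-rate sample is normalized against a resting baseline and used to drive environment parameters.

```python
# Illustrative biofeedback mapping: normalize the latest heart-rate sample
# against an assumed resting baseline and use it to drive scene parameters.
def environment_params(heart_rate_bpm, resting_bpm=65.0, max_bpm=120.0):
    """Return values in [0, 1]: 0 = fully calm scene, 1 = fully agitated scene."""
    t = (heart_rate_bpm - resting_bpm) / (max_bpm - resting_bpm)
    t = min(max(t, 0.0), 1.0)
    return {
        "ambient_warmth": 1.0 - t,        # calmer = warmer lighting
        "wave_intensity": t,              # higher heart rate = rougher virtual sea
        "breathing_cue_rate": 6 + 4 * t,  # guided breaths per minute
    }

print(environment_params(92))
```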
AfRo was an app built in less than 24 hours at MMA Ideathon 2018 as part of Godrej's problem statement to create an application that encouraged the use of their fabric roll-on (FRO) mosquito repellent. The app gamified the entire process of applying the FRO and incentivized product purchases with in-game rewards. AfRo was one of the finalists at the hackathon.
During Summer 2022, I got the opportunity to work on the Vuforia Spatial Toolbox (VST) as an intern at the PTC Reality Lab. PTC's Reality Lab addresses cutting-edge challenges in AR and spatial computing, and Spatial Toolbox acts as a research platform on which we built spatial computing experiments. During the internship, I was tasked with leading research on Spatial Toolbox by building an integration between it and Onshape. As Spatial Toolbox is web-based, my work prompted me to learn full-stack web development as well as build web AR tools using three.js and node.js. My work at the Reality Lab gave me an opportunity to apply my research skills in industry. Use the link below to learn more about Spatial Toolbox.
Avontus Viewer is a 3D model viewer for desktop, Android, iOS, and HoloLens. The application visualizes CAD drawings created in Avontus Designer and makes them easy to share and understand. It allows users to view these models in a 3D view, in AR, in VR, as well as through an exported YouTube video. I was the project lead for Avontus Viewer during my time at Avontus. I worked on 3D rendering, enhancing the AR and VR features, communication with other proprietary applications, as well as an "Upload to YouTube" feature for Viewer. My role at Avontus was a unique one, as I worked on both product development and bits of UX for all the projects I was involved in. I learned a lot about desktop development, working with CAD data, graphics, and production code thanks to Viewer. You can check out the link below if you want to learn more about it.
Avontus Designer is a CAD software for scaffolding that I worked on during my time at Avontus. Designer allows users to create geometrically complex CAD drawings using various tools and primitives. It has built-in 3D viewers and allows for exporting drawings to other Avontus software or file formats.
ReWIVE presented a possible alternative to the current 3-DOF mobile VR. It leveraged SLAM from ARCore to provide the user with complete six degree-of-freedom (6-DOF) tracking in a mobile VR environment. The video below shows a game (in which the player has to dodge spheres) that was set up to demonstrate how the user's position can contribute to an experience. I also worked on a social VR version of this project called MetaVS ( https://www.metavs.space/ ). When I started working on this project, I didn't have access to a VR headset but really wanted to work on 6-DOF VR projects, so I was inspired to find a way to implement 6-DOF tracking on mobile.
Touchless used the Leap Motion to enable intuitive gesture-based controls for desktop navigation. The project was built in Python and can be configured for any screen. It had 4 basic controls: cursor control, touch to select, swipe to scroll, and pinch to close applications. This project was inspired by the lack of affordable touch-screen monitors. I also felt it would be really handy to have gesture-based shortcuts programmed to my needs.
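The control mapping can be sketched roughly as follows; this is not the original source, and the gesture-event format is a hypothetical stand-in for the Leap SDK polling loop the project used, but the OS-side actions are shown going through pyautogui.

```python
# Sketch of the Touchless control mapping: gesture events from the Leap Motion
# are translated into OS actions with pyautogui. The event dictionaries below
# are a hypothetical stand-in for the Leap SDK frames the real tool polled.
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

def handle(event):
    if event["type"] == "move":                      # palm position -> cursor
        x, y = event["norm_x"], event["norm_y"]      # normalized [0, 1] coordinates
        pyautogui.moveTo(x * SCREEN_W, (1 - y) * SCREEN_H)
    elif event["type"] == "tap":                     # "touch" in the air -> select
        pyautogui.click()
    elif event["type"] == "swipe":                   # vertical swipe -> scroll
        pyautogui.scroll(5 if event["direction"] == "up" else -5)
    elif event["type"] == "pinch":                   # pinch -> close the active window
        pyautogui.hotkey("alt", "f4")

# In the real tool, a loop polls the Leap controller and feeds events here:
handle({"type": "move", "norm_x": 0.5, "norm_y": 0.5})
```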
I created my own version of Leap Motion's Project North Star. It was built in an attempt to find a cheap alternative AR headset for research work. The headset is made out of 3D-printed parts, a Leap Motion, and Raspberry Pi screens. It also provides complete hand tracking in AR. You can learn about the original North Star at the link below.
This project is a Unity plugin I created to link OpenCV libraries to the Unity game engine. The plugin allows full positional tracking of a user's face. I created it to use face tracking in AR experiences and video games. This seemed like something a lot of people wanted, as my YouTube video about it reached 1000+ views without any SEO. It is an open-source project, and you can check it out at the link below.
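As a rough Python counterpart of what the plugin does natively, the sketch below detects the face with OpenCV and converts the bounding box into an approximate 3D position using the pinhole camera model; the focal length and face-width constants are assumptions, not values from the plugin.

```python
# Illustrative face position estimation with OpenCV (the plugin itself bridges
# OpenCV to Unity; this only shows the underlying idea).
import cv2

FOCAL_PX = 600.0        # assumed camera focal length in pixels
FACE_WIDTH_M = 0.16     # assumed real-world face width in metres

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_position(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # largest detected face
    z = FOCAL_PX * FACE_WIDTH_M / w                        # depth from apparent size
    cx, cy = x + w / 2, y + h / 2                          # image-space face center
    return ((cx - frame.shape[1] / 2) * z / FOCAL_PX,      # x offset in metres
            (cy - frame.shape[0] / 2) * z / FOCAL_PX,      # y offset in metres
            z)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(face_position(frame))
cap.release()
```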
Jack is a re-imagined 2D platformer that was designed around the idea of controlling platforms instead of the player. The result was a game that revolved around navigating levels with the help of user-generated platforms. The demo consisted of 3 levels. The gameplay footage below shows a part of the final level. Check out the link below for the full YouTube video.
VuforiaWire is one of the first projects I ever made when I started working on augmented reality and 3D modeling. I was very interested in tangible AR, as it opens up the possibility for more intuitive user interactions. The game is essentially an AR version of the buzz wire game. The user controls a hoop that is anchored to a piece of paper, while the wire itself is anchored to a different image.
I created this game as part of a Unity Engine course that I took in my first year of undergrad. You, the player, take the role of a space frog and go up against an endless army of robots on procedurally generated levels. This game was my first experience working with particle effects, object pooling, advanced Unity UI, and procedural level generation.
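Object pooling, for reference, is the pattern of recycling inactive objects instead of repeatedly creating and destroying them; here is the idea in a minimal Python sketch (the game itself is built in Unity, so this is purely illustrative).

```python
# General illustration of the object-pooling pattern used for the endless
# enemies: inactive objects are recycled rather than allocated and destroyed.
class Pool:
    def __init__(self, factory, size):
        self.free = [factory() for _ in range(size)]   # pre-allocated objects
        self.active = []

    def spawn(self):
        obj = self.free.pop() if self.free else None   # None = pool exhausted
        if obj is not None:
            self.active.append(obj)
        return obj

    def despawn(self, obj):
        self.active.remove(obj)
        self.free.append(obj)

robots = Pool(lambda: {"hp": 3}, size=20)
enemy = robots.spawn()       # reused instead of newly allocated
robots.despawn(enemy)        # returned to the pool when defeated
```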