Lessons Learned: Developing for Virtual Reality
Virtual Reality is steadily becoming more accessible to the average consumer. With users demanding more immersive experiences and market options becoming more varied, the VR industry is likely to grow considerably in the near future; according to Deloitte, it will reach 1 billion USD by the end of this year.
And it’s not just gamers who are benefiting from the immersive possibilities it offers. VR experiences are being explored and created in education, entertainment, music, shopping, and more.
Recently, we jumped on the opportunity to develop immersive VR content for the Samsung Gear VR, using Samsung’s 360 camera, which hasn’t yet hit the market. The result is ClearVR, a mobile application that lets users explore the features, pricing, interiors, and exteriors of listed vehicles. In this article, we give a retrospective view of the challenges, resolutions, and lessons we learned along the way.
Our Experience Developing For VR
First, we took a look at Samsung’s framework and existing demos to scope out the gaps in previous VR content. We wrote up a rough outline of what we wanted to achieve by comparing other demos to see how they were structured and where we could improve. Most importantly, we wanted our application to deliver high-quality content with less lag: the demos we examined were fragmented, lagged, and faked 3D objects by stringing multiple images together, which broke the immersive effect we were aiming for. Instead, we used Samsung’s 360 camera and real 3D models with the Gear VR headset to create more compelling visuals, which are essential to a truly immersive experience.
Challenges

- The online community is essentially non-existent (no Stack Overflow presence, forums, or Q&As), and documentation is very limited, so there was little to guide us in the right direction. This pushed us to read through Samsung’s framework to see how they structured their code.
- We also found it difficult to find high-quality 360-degree (HDRI) images online.
- The car models looked pixelated in the headset due to the phone’s low pixel density and the lack of anti-aliasing.
- We had to limit model quality and quantity to preserve performance.
- When we loaded the car models, they lagged because of the amount of detail they contained, which took up far more device memory than we originally anticipated.
- We should have allotted more time to the app’s architecture before development. Due to deadlines and our lack of experience with the framework, areas that required more focus were overlooked (refactoring is now needed to improve efficiency).
- Merge conflicts arose between our branches. Our code diverged so much that when large changes were made to a file, it was difficult to merge back into dev.
- Smudge filter errors: when large files were committed, SourceTree repeatedly prompted for a GitHub password and failed to pull, printing “error: external filter git-lfs smudge %f failed”.
- There were also long Gradle build times.
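For the smudge-filter failure described above, a common workaround (the commands below are standard git-lfs usage, and the repository URL is a placeholder, not our actual repo) is to disable the per-file smudge filter and fetch the large files in one explicit batch:

```shell
# One-time setup: keep the LFS smudge filter from running on every checkout/pull
git lfs install --skip-smudge

# Clone (or pull) normally; LFS-tracked files arrive as small pointer files
git clone <your-repo-url>
cd <your-repo>

# Then fetch and check out the actual large files in a single batch
git lfs pull
```

This avoids the per-file filter invocations (and their repeated credential prompts) during normal pulls; whether it fully resolves the SourceTree symptom depends on your client and credential setup.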
Solutions

- We needed to detect where the cursor was pointing, whether at the car or the background, and focus it accordingly. We placed a bounding box around the car as the cursor’s hotspot, rather than using the car’s mesh, so that hits would be simpler and cheaper to detect.
- To speed up Gradle, we added “org.gradle.jvmargs=-Xmx2048m” to gradle.properties, giving the build JVM more heap. This traded higher memory usage for faster builds.
- We discovered that the framework allows two ways of loading a model: loading all the parts separately, or loading the model as a single file. Loading parts separately is hard on whoever maintains and updates the app, because they have to buy the car model, take it apart, and save each piece into specific files. On the other hand, if you want more detail, the split approach makes adding features easier, such as custom shaders for glass transparency. In the end, we loaded each car model as one file using GVRContext.loadModel() instead of loading meshes and materials separately.
- In the Gear VR Framework, GVRContext.loadMesh() loads only one mesh from the given object file and discards the rest. We resolved this by adding a new method to the framework that loads all meshes.
- We stored the models’ meshes in Android’s LRU cache rather than relying on weak references, which sped up loading times and improved the user experience.
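The bounding-box hotspot idea can be sketched outside the framework as a plain ray-versus-box (slab) test; the class and method names below are our own illustration, not Gear VR Framework API:

```java
// Minimal ray-vs-axis-aligned-bounding-box test (slab method).
// In the app, the "ray" is the user's gaze direction from the camera and
// the box is a loose hull around the car model.
public class GazePicker {
    /** Returns true if a ray from origin o along direction d hits the box [min, max]. */
    public static boolean hitsBox(double[] o, double[] d,
                                  double[] min, double[] max) {
        double tNear = Double.NEGATIVE_INFINITY;
        double tFar  = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            if (d[i] == 0.0) {
                // Ray is parallel to this slab: origin must already be inside it
                if (o[i] < min[i] || o[i] > max[i]) return false;
            } else {
                double t1 = (min[i] - o[i]) / d[i];
                double t2 = (max[i] - o[i]) / d[i];
                double lo = Math.min(t1, t2), hi = Math.max(t1, t2);
                tNear = Math.max(tNear, lo);
                tFar  = Math.min(tFar, hi);
                if (tNear > tFar || tFar < 0) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] boxMin = {-1, -1, -1}, boxMax = {1, 1, 1};
        // Gaze straight ahead from z = -5 hits the box...
        System.out.println(hitsBox(new double[]{0, 0, -5},
                                   new double[]{0, 0, 1}, boxMin, boxMax));  // true
        // ...while a ray pointing away misses it.
        System.out.println(hitsBox(new double[]{0, 0, -5},
                                   new double[]{0, 0, -1}, boxMin, boxMax)); // false
    }
}
```

Testing a single box like this is a handful of comparisons per frame, versus a potentially expensive per-triangle test against a detailed car mesh.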
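On Android the mesh cache above was android.util.LruCache; the same least-recently-used policy can be sketched in plain Java with LinkedHashMap’s access-order mode (the class name and the string "meshes" here are stand-ins for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the mesh cache idea: keep the N most recently used meshes in
// memory so revisiting a car doesn't reload its geometry from disk.
// Unlike weak references, eviction is deterministic and bounded.
public class MeshCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public MeshCache(int maxEntries) {
        super(16, 0.75f, true);   // accessOrder = true -> iteration order is LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        MeshCache<String, String> cache = new MeshCache<>(2);
        cache.put("sedan", "sedan-mesh");
        cache.put("coupe", "coupe-mesh");
        cache.get("sedan");           // touch "sedan" so it is most recent
        cache.put("suv", "suv-mesh"); // evicts "coupe", the LRU entry
        System.out.println(cache.keySet()); // [sedan, suv]
    }
}
```

The key property for our use case is the bounded size: a weakly-referenced mesh can be collected at any time and force a reload mid-session, whereas an LRU cache only drops meshes once a known capacity is exceeded.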
Lessons Learned

1. Choose the right framework: Select a framework that has a lot of supportive documentation and a strong online community where solutions can easily be found.
2. Know the framework well: It’s important to become familiar with the framework you’re working with before you begin developing. Typically, developers look through documentation or online resources such as Stack Overflow to find the answer to a problem. Take scalability and maintainability into consideration to save time and frustration, and study the framework so you can use it to your best advantage. If you choose the wrong data structure, you risk making your project much more complex.
3. Experiment and conduct research: Since virtual reality is still new to a lot of developers, it’s crucial to examine other products by conducting product demos with your team before starting the project. Before development, we looked at existing demos to determine their limitations, structure, and the components used. This gave us an idea of what we needed while keeping scalability in mind.
4. Resources: Always consider the resources that you’ll use before you start developing. We used resources from Blender and TurboSquid, only to realize afterward that the model we chose was too complex for our device to handle. Be aware of your budget and gather resources accordingly.
5. Preparation and planning: We didn’t have a set architectural plan at the beginning of the development process and as we progressed with the project, we diverged from our original idea. As new information was discovered part-way through development, we changed the way car data was stored by merging the previously separated components of the car into one component.
As a team, we needed to decide between building the app natively and building with Unity or Unreal Engine. The latter would have presented different challenges, but more resources would have been available for faster, more efficient development. After weighing our options, we decided to work with Samsung’s framework.
As costs decrease and components become more affordable, more virtual reality content is being developed for users worldwide. At the beginning of 2016, there were over 185 apps available for the Gear VR on the Oculus Store alone. Virtual reality has the power to show the user rather than describe the experience to them, which is why it makes such a powerful medium and why it is changing the entertainment industry. The future of VR holds a great deal of promise for users and software developers worldwide.
As a full-service custom mobile app development company, Clearbridge Mobile handles the entire lifecycle of your product, from Planning and Strategy, UX/UI Design, App Development, and QA/User Acceptance Testing to Technical Delivery. We use a unique agile development process that gives you control over scope, reduces your risk, and provides predictable velocity. Start a conversation today to get started on your mobile project.