
I used an app to create 3D models with my iPhone, and it’s shockingly great

The pace of innovation in artificial intelligence image generation is phenomenal. One company, Luma Labs, provides an excellent example of a practical yet hugely entertaining application of the latest technology to 3D imaging.

Luma AI is in beta testing on the iPhone and will eventually be available on Android as well. I got into the beta test group and can share what this amazing app does and how easy it is to get incredible results.

What is Luma AI?

Alan Truly captures a 3D model of a figurine with an iPhone 13 Pro Max
Photo by Tracey Truly

Luma AI is an app and a service developed by Luma Labs. It captures three-dimensional images using a technique known as Neural Radiance Fields (NeRF). The approach is loosely related to the ray tracing that makes high-end game graphics look so realistic: both work out the color of each pixel by following rays through a scene.

NeRFs have been around for a few years but, until very recently, existed primarily in research facilities. With the explosion of AI image generation, headlined by photorealistic DALL-E renderings, NeRFs are beginning to reach a much broader audience. The first wave of NeRF software required developer skills: installing packages from GitHub, then training the AI on a set of photos. It was a bit much for the average person.
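For the curious, here is a minimal sketch of the core idea, with every name and number invented for illustration. It is not Luma’s implementation: the toy_field function stands in for the trained neural network, which in a real NeRF learns color and density from your photos.

```python
# Toy illustration of the NeRF idea: a field maps a 3D point to color and
# density, and a pixel is rendered by integrating samples along a camera ray.
import numpy as np

def toy_field(points):
    """Stand-in for the trained network: a soft sphere of radius 1."""
    dist = np.linalg.norm(points, axis=-1, keepdims=True)
    density = np.clip(5.0 * (1.0 - dist), 0.0, None)  # dense inside, empty outside
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)       # color varies with position
    return rgb, density

def render_ray(origin, direction, n_samples=64, near=0.5, far=4.0):
    """Volume-rendering quadrature, as described in the NeRF literature."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction               # samples along the ray
    rgb, sigma = toy_field(points)
    delta = t[1] - t[0]                                    # spacing between samples
    alpha = 1.0 - np.exp(-sigma[:, 0] * delta)             # per-sample opacity
    transmit = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * transmit                             # light reaching the camera
    return (weights[:, None] * rgb).sum(axis=0)            # composited pixel color

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # the color seen looking at the toy sphere along +z
```

Training reverses this: the network is adjusted until rays rendered this way reproduce the captured photos, after which any new viewpoint can be rendered the same way.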

Luma Labs is about to make the process dramatically simpler with its Luma AI app. From start to finish, the entire process can be managed from an iPhone, and the end result is more accessible as well.

Luma AI iPhone compatibility

Someone holding the iPhone 14 Pro Max.
Joe Maring / Digital Trends

Since Apple made a point of demonstrating the 3D depth-measuring capabilities of its LiDAR sensors, you might expect Luma AI to require the more expensive iPhone 14 Pro or iPhone 14 Pro Max to capture 3D models. However, the clever developers at Luma Labs use artificial intelligence instead, which makes the technology compatible with iPhones as old as the iPhone 11.

In the future, the app will become available on Android, and a web version is already in beta testing. In an interview, Luma Labs CEO Amit Jain said the iPhone app is expected to be ready for public release in a few weeks.

How to use Luma AI

The rear cameras on the iPhone 14 Pro Max.
Joe Maring / Digital Trends

To use Luma AI, you simply circle slowly around an object at three different heights. An AR overlay guides you through the capture, which takes a few minutes and becomes easier with practice. Before long, you’ll be able to capture a medium-sized object like a chair in a couple of minutes.

Objects of any size can be handled because, to Luma AI, a capture is just a series of images. Whether you circle a cup, a statue, or a building, the general idea remains the same.

The app will let you know when it has enough images, and at that point a Finish button will appear. You can also keep circling to fill in gaps in the AR cloud of rings and rectangles that represents the photos taken so far. The app automatically stops the capture once an ideal number of photos has been collected. There’s also a freeform mode that lets you capture even more photos at different angles and distances. You can see the process in the YouTube video I created below. It’s an iPhone app, so it’s a portrait video.

Luma AI beta demo for Digital Trends

Processing is the next step, and it happens on Luma Labs’ servers. After an hour or so, the finished NeRF becomes available in the app in several forms. The first is a generated video showing a fly-by of the object in its natural environment. Next is an interactive version that lets you spin the view by dragging a finger or a mouse across the image.

Most impressive of all, the subject of the capture, extracted from the background, is also available. With this representation, you can pivot the 3D object on any axis and zoom in to see it more closely. The sharpness depends on how many images were collected and how slow and stable you were during the capture process.

Getting better all the time

Luma Labs is updating the app and service at a remarkable pace. Within a week of my receiving the beta test invitation, two powerful new features were added that greatly expand the possibilities. The first is a web upload option that lets you capture video without the app, then upload it to the Luma Labs website for processing. The results appear online and in the app.

This means it’s possible to use any of the iPhone’s camera modes, capture video with a dedicated camera, or even record video with AR glasses like Ray-Ban Stories. For example, a drone video becomes even more epic when you can smooth the motion and change direction after you’ve already landed. Luma Labs shared a good example showing an aerial view of autumn leaves in this tweet.

Fall in Palo Alto is gorgeous! 🍂 https://t.co/EwNkiv0DQV pic.twitter.com/hdd7iBLYgV

— Luma AI (@LumaLabsAI) October 22, 2022

The other new feature opens up 3D editing, painting, and 3D printing opportunities. The 3D meshes can be exported with textures in OBJ or glTF format. They aren’t optimized, but they can be viewed with textures intact even in an online viewer such as the free, open-source Online3DViewer website.
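If you want to inspect an export before editing or printing, a few lines of Python will do. This is a rough sketch rather than part of Luma’s tooling: the filename is hypothetical, and it relies on the third-party trimesh library (pip install trimesh).

```python
# Quick sanity check of an exported Luma mesh (filename is hypothetical).
import trimesh

# force="mesh" flattens multi-part files; glTF exports often load as a Scene
mesh = trimesh.load("luma_capture.obj", force="mesh")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
print("watertight:", mesh.is_watertight)  # 3D printing usually wants a closed surface
```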

A Luma AI capture of an art figurine is being refined in MeshLab.
Sprout Sprite Fairy Figurine

It’s also possible to open the 3D files in a mesh editor like the free, open-source MeshLab to delete stray artifacts that appear as floating blobs, clean up the geometry, and simplify the model before exporting it in a variety of formats; a scriptable take on that cleanup is sketched below. The figurine featured above is about three inches tall and was sculpted by my wife, Tracey, for her business, ALittleCharacter. Luma AI captured a remarkable amount of detail in the sculpture and the log it was resting upon. The log could have been selected and removed in MeshLab as well.
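For anyone who prefers a script to MeshLab’s interactive tools, here is one way to drop the floating blobs, again using the third-party trimesh library with hypothetical filenames: split the mesh into connected components and keep only the largest.

```python
# Keep only the largest connected component; stray blobs have few faces.
# Geometry only: textures may need to be re-applied in a mesh editor.
import trimesh

mesh = trimesh.load("luma_capture.obj", force="mesh")
parts = mesh.split(only_watertight=False)         # one sub-mesh per component
largest = max(parts, key=lambda p: len(p.faces))
print(f"kept {len(largest.faces)} of {len(mesh.faces)} faces "
      f"from {len(parts)} components")
largest.export("luma_capture_clean.obj")
```

Selective edits, like removing the log under the figurine, are still easier to do interactively in MeshLab.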

The highs and lows of 3D scanning

Kyle Russell shared a dessert display from a party, mentioning that he asked the adults to wait for their treats so he could capture it as a digital diorama.

Used @LumaLabsAI at a birthday party last night, made a bunch of adults not eat dessert so I could circle the table with my phone to make a 3D AI dream of the setup like a very cool person pic.twitter.com/sP0vVPB3yx

— Kyle Russell (@kylebrussell) October 30, 2022

Although Luma AI can process video, it relies on still images to construct a three-dimensional scene, so a subject that moves can reduce the quality or clarity of the capture. A 3D image of a seated person, as shown in Albert Bozesan’s tweet, comes out well. In the same tweet, the second capture, of a sculpture, shows what happens when there’s movement within the scene: people who walked near the subject appear in the background as distorted shapes.

Took two @LumaLabsAI #NeRFs by a Bavarian lake today. Great way to capture memories, feels like Minority Report. #Tegernsee pic.twitter.com/HLC0ekF7uD

— Albert Bozesan (@AlbertBozesan) October 30, 2022

Luma AI price and availability

Luma AI is currently in beta testing, and invitations are given out periodically via the company’s Twitter account. If you have a compatible iPhone and an interest in this technology, you might be able to get early access. There’s also a waitlist on the Luma Labs website.

Luma Labs CEO Jain indicated that pricing is yet to be determined and depends on how broad the user base turns out to be and how the results of the scans are used. Based on these statements, there might be a professional subscription with more advanced features and a less expensive personal subscription. For the time being, the service remains free to use.
