In the field of generative AI, computer vision has made tremendous strides in recent years. Stable Diffusion has transformed content production by offering an openly available model that produces high-fidelity RGB images from text prompts. This research proposes a Latent Diffusion Model for 3D (LDM3D) built upon Stable Diffusion v1.4. Unlike its predecessor, LDM3D produces both an RGB image and a depth map from a given text prompt, as illustrated in figure 1. Users can thus create full RGBD representations of text prompts, bringing them to life in vibrant and engrossing 360° views. The LDM3D model was fine-tuned on a dataset of roughly 4 million tuples, each consisting of an RGB image, a depth map, and a caption.
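To make the RGBD representation concrete, the sketch below (an illustration, not the paper's code; the shapes and dtypes are assumptions) stacks an H×W×3 RGB image and an H×W depth map into a single H×W×4 array:

```python
import numpy as np

def pack_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an (H, W, 3) RGB image and an (H, W) depth map
    into a single (H, W, 4) RGBD array."""
    assert rgb.shape[:2] == depth.shape, "RGB and depth must share height/width"
    return np.dstack([rgb, depth])

# Toy example: a 2x2 image with a matching depth map.
rgb = np.zeros((2, 2, 3), dtype=np.float32)
depth = np.ones((2, 2), dtype=np.float32)
rgbd = pack_rgbd(rgb, depth)
print(rgbd.shape)  # (2, 2, 4)
```

The fourth channel simply rides along with the color channels, which is what lets a single diffusion model denoise color and depth jointly.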
This dataset was constructed from a subset of LAION-400M, a large-scale dataset of more than 400 million image-caption pairs. The depth maps used for fine-tuning were produced with the DPT-Large depth estimation model, which provides highly accurate relative depth estimates for every pixel in an image. Accurate depth maps are essential for creating realistic, immersive 360° views that let users experience their text prompts in vivid detail. To demonstrate LDM3D's potential, researchers from Intel Labs and Blockade Labs built DepthFusion on top of LDM3D, an application that takes the generated 2D RGB images and depth maps and computes a 360° projection in TouchDesigner.
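DPT-Large outputs relative depth, so values are only comparable within a single image. A common preprocessing step (a sketch of one plausible convention, not necessarily the paper's exact pipeline) is per-image min-max normalization to [0, 1]:

```python
import numpy as np

def normalize_depth(depth: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Min-max normalize a relative depth map to the [0, 1] range.
    Relative depth has no absolute scale, so each image is
    normalized independently."""
    d_min, d_max = depth.min(), depth.max()
    return (depth - d_min) / (d_max - d_min + eps)

depth = np.array([[2.0, 4.0], [6.0, 10.0]])
norm = normalize_depth(depth)
print(norm.min(), norm.max())  # ~0.0 and ~1.0
```

Normalized depth can then be stored alongside the RGB channels in a fixed numeric range, which keeps the fine-tuning inputs consistent across images.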
DepthFusion has the potential to fundamentally change how people interact with digital media. TouchDesigner is a flexible framework for building interactive and immersive multimedia experiences, and DepthFusion harnesses its creative potential to produce captivating 360° panoramas that vividly depict text prompts. With DepthFusion, users can experience a text prompt, whether a serene forest, a bustling cityscape, or a sci-fi universe, in a previously inconceivable way. This technology could transform a range of sectors, including gaming, entertainment, design, and architecture.
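The 360° rendering itself happens inside TouchDesigner, but the underlying geometry can be sketched: treating the generated image as an equirectangular panorama, each pixel maps to a direction on a sphere, and the depth value pushes the point out along that ray. The following is a simplified illustration of that mapping, not DepthFusion's actual implementation:

```python
import numpy as np

def equirect_to_points(depth: np.ndarray) -> np.ndarray:
    """Map an (H, W) equirectangular depth map to (H, W, 3) 3D points.
    Longitude spans [-pi, pi] across the width, latitude [-pi/2, pi/2]
    across the height; each point lies at its depth along the pixel's ray."""
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # azimuth per column
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # elevation per row
    lon, lat = np.meshgrid(lon, lat)                     # (h, w) grids
    # Unit direction vectors on the sphere, scaled by depth.
    x = np.cos(lat) * np.cos(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.sin(lon)
    return depth[..., None] * np.stack([x, y, z], axis=-1)

# A constant depth of 1 places every pixel on the unit sphere.
points = equirect_to_points(np.ones((4, 8)))
print(points.shape)  # (4, 8, 3)
```

Varying the depth per pixel displaces points off the sphere, which is what gives the panorama its sense of parallax when the viewpoint moves.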
The work makes three main contributions. (1) It proposes LDM3D, a novel diffusion model that generates RGBD images (RGB images with matching depth maps) from a text prompt. (2) It introduces DepthFusion, an application that turns the RGBD images produced by LDM3D into immersive 360°-view experiences. (3) It evaluates the quality of the generated RGBD images and 360°-view immersive videos through comprehensive experiments. Together, LDM3D and DepthFusion illustrate how text-to-RGBD generation, rendered through TouchDesigner, can deliver immersive and interactive 360°-view experiences.
The findings of this study could fundamentally alter how people interact with digital media, transforming everything from entertainment and gaming to architecture and design. These contributions open up new opportunities for research in multiview generative AI and computer vision. The authors look forward to seeing how this area develops and hope the community benefits from the work presented.