The First Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments (PIES-ME)

Photorealistic media aim to represent the world faithfully, creating an experience that is perceptually indistinguishable from a real-world experience. Current standard media applications fall short of this goal because consumer acquisition and production technologies do not capture or produce enough of the world’s visual, audio, spatial, and temporal information to represent it faithfully. In recent years, however, the area of photorealistic media has seen considerable activity, with new multimedia areas emerging, such as light fields, point clouds, ultra-high definition, high frame rate, high dynamic range imaging, and novel 3D audio and sound field technologies. Combining these technologies can certainly help pave the way for hyper-realistic media experiences, but several technological challenges must first be overcome. Research in this area requires large datasets, software tools, and powerful infrastructures. Among these, the availability of meaningful datasets with diverse, high-quality content is of particular importance.

In recent years, the number of vision-based datasets has grown quickly. Some of these datasets provide photorealistic image sequences created by physically capturing real-world environments, but they are limited to one or two types of images (e.g., monocular and depth) and small sets of images. To address the limitations of capturing photorealistic datasets of real-world environments, researchers have begun to render image sequences of synthesized virtual environments, which allow more types of images (e.g., monocular, stereoscopic, depth, and semantic) and often include much larger sets of images. However, many of these synthetic datasets are not photorealistic because they rely on lower-fidelity virtual objects and/or rasterization-based rendering techniques. 360° VR datasets have also grown, but most consist of 360° videos captured by diverse camera hardware and curated from various Internet sources, with varying resolutions, content, and camera motions. In summary, most available datasets are limited and do not provide researchers with adequate tools to advance the area of photorealistic applications.

The goal of this workshop is to engage experts and researchers on the synthesis of photorealistic images and/or virtual environments, particularly in the form of public datasets, software tools, or infrastructures, for multimedia research. Such public datasets, software tools, and infrastructures will lower entry barriers by enabling researchers who lack expensive hardware (e.g., complex camera systems, smart glasses, robots, autonomous vehicles) to simulate such hardware and create datasets representative of it across various scenarios. Photorealistic image and environment synthesis can benefit multiple research areas in addition to multimedia systems, such as machine learning, robotics, computer vision, mixed reality, and virtual reality.

Important Dates

  • Paper Submission: July 22, 2022, 11:59 pm Anywhere on Earth (AoE)
  • Notification of Acceptance: August 10, 2022 (extended from August 7, 2022)
  • Camera-ready version: August 21, 2022, 11:59 pm EDT (extended from August 14, 2022, 11:59 pm EDT)

Supporters