
· 6 min read
Onur Tasar

Image Relight Poster

ClipDrop Image Relight

Today we are launching our 💡 image relighting AI application 💡. The app allows you to apply professional lighting to your images 📸 in real time ⚡.


The features include:

  • adding new light sources
  • changing their color, intensity, and position
  • making the image brighter (or darker)
  • removing the background (Thanks to ClipDrop Background Remover)
  • relighting only the foreground object or only the background
  • relighting the whole image
  • and many more!

With the app, you can transform your photos into extraordinary visuals. 🪄 💫 ✨

Relighting an Image

Lighting is of paramount importance when creating a photograph. It not only makes an image darker or brighter, but also drastically changes the mood and emotion of the image. The position, color, and intensity of the light sources have a significant impact on how the subject looks. For instance, in a portrait, mixing multiple light sources with different colors can make the image more vibrant, whereas the image might look dull and unpleasant if the lighting is bad.

Here is another example of an image relighted by our image relighting AI application.

Let's Relight Portrait Images

In this section, we will demonstrate step by step how you can relight your portrait with the ClipDrop Image Relight application. We will be using several portrait images from Unsplash.


Ambient light


Ambient light illuminates every pixel equally. This light source is particularly useful for making the image darker (or brighter). Its color can be changed as well.

To modify the ambient light, click on the Ambient button (right below the image) in the editor. To make the image brighter or darker, move the slider to the right or to the left, respectively. You can change the ambient light color in real time by clicking on the color picker.
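As a minimal sketch (not the app's actual implementation), ambient light can be thought of as scaling every pixel by the same intensity and tint, assuming an RGB image stored as a float NumPy array:

```python
import numpy as np

def apply_ambient(image, intensity=1.0, color=(1.0, 1.0, 1.0)):
    """Scale every pixel equally by an ambient intensity and tint color.

    image: H x W x 3 float array in [0, 1]
    intensity: > 1 brightens the image, < 1 darkens it
    color: RGB tint of the ambient light
    """
    tint = np.asarray(color, dtype=np.float64)
    return np.clip(image * intensity * tint, 0.0, 1.0)

# A mid-gray image brightened by 50% becomes 0.75 gray.
img = np.full((2, 2, 3), 0.5)
out = apply_ambient(img, intensity=1.5)
```

Moving the slider corresponds to changing `intensity`; the color picker corresponds to changing `color`.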

See below what drastic changes you can make just by playing with a slider and a color picker 🙂


Light Bulb


Once placed over the image, a light bulb illuminates the pixels around it. Its intensity decreases as you move away from the light source: the closest pixels get the most light, and the furthest pixels get the least.
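A toy sketch of this falloff (the function and the linear falloff model are illustrative, not the app's actual attenuation):

```python
import numpy as np

def bulb_attenuation(pixel_xy, bulb_xy, radius):
    """How much light a pixel receives from a bulb: the closest pixels
    get the most light, pixels beyond `radius` get none."""
    d = np.linalg.norm(np.asarray(pixel_xy, dtype=np.float64)
                       - np.asarray(bulb_xy, dtype=np.float64))
    return max(0.0, 1.0 - d / radius)  # linear falloff, clamped at zero

near = bulb_attenuation((10, 10), (12, 10), radius=20)   # close to the bulb
far = bulb_attenuation((10, 10), (100, 10), radius=20)   # outside the radius
```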

To add a new bulb, simply click on the + New Light button. To remove one, click on the trash-bin icon. The eye icon on the top right hides / displays the light bulbs. The light-bulb icon next to the eye icon allows you to toggle between the original and the relighted image.

Each light bulb has four main parameters: color, intensity, distance, and radius. You can easily change the color with the color picker, and make the light source more or less intense by moving the slider next to it.

The bottom-right slider determines the light radius. A bulb with a larger radius illuminates a larger area, whereas one with a smaller radius illuminates a smaller part of the image.

To make the illumination physically correct, we perform instant 3D reconstruction and allow the light bulbs to move in 3D space. You can move each light bulb to the left, to the right, towards the image, and away from it.
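A rough sketch of this idea, assuming a simple orthographic lift of pixels into 3D using the depth value (the app's real reconstruction is more involved, and the names here are ours):

```python
import numpy as np

def relight_weight(depth, bulb_pos, radius):
    """Per-pixel attenuation for a bulb placed in 3D space.

    depth: H x W map giving each pixel's distance from the camera
    bulb_pos: (x, y, z) position of the bulb, in pixel/depth units
    radius: distance at which the bulb's light fades to zero
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Lift every pixel to a 3D point using its depth value.
    points = np.stack([xs, ys, depth], axis=-1).astype(np.float64)
    d = np.linalg.norm(points - np.asarray(bulb_pos, dtype=np.float64), axis=-1)
    return np.clip(1.0 - d / radius, 0.0, 1.0)

depth = np.full((4, 4), 5.0)                # a flat plane 5 units away
w = relight_weight(depth, bulb_pos=(0.0, 0.0, 5.0), radius=10.0)
```

Moving the bulb towards or away from the image corresponds to changing the `z` component of `bulb_pos`.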

The app enlarges the circle around the bulb as you move it away from the image.

If you move the bulb towards the image, it might go inside the model. In this case, we make the part of the circle that is inside the model transparent. If the bulb is completely inside the model, the app makes it fully transparent.

A mix of two lights (maybe one on the left and another on the right) oftentimes results in cool effects. Another favourite trick of mine is reducing the ambient light, placing a single light bulb over the area I want to illuminate, and reducing its radius.


Keeping / Removing the Background


You can keep or remove the background as well. If you opt for removing it, this is done automatically by ClipDrop Background Remover. You can also choose whether or not the background receives light. If you prefer to remove the background, you can pick its replacement color.

Do not hesitate to relight your portrait with our app using different light and background colors.


Apply Studio Lights Only to the Background


One way to achieve this is to drop the distance for each light source close to zero. By doing so, we move the light sources towards the image.

You can get the two cool photos in the middle by placing a light bulb (or several) very close to the image: just set its distance to the minimum (or close to it).

Do We Reduce the Image Quality?

Do we make your image blurry? No.

However, the app displays a resized version of your image so that it can be relighted in real time. To obtain the relighted image at the highest quality, you need to download it.

Technicalities Behind the App

Would you like to learn the technical details about how we have built this app? Check out our other blog post where we explain the technicalities behind the app.

Try It Yourself for Free

Give it a try! It is free! No account or credit card is needed.

Do not hesitate to share with us any creative visuals you may have. Follow us on Twitter for the latest updates.

If you have any question, feel free to join our Slack Community or contact us.

· 6 min read
Onur Tasar

Image Relight Poster

ClipDrop Image Relighting App

We have recently introduced 💡 our image relighting AI application 💡, which allows you to apply professional lighting to your images 📸 in real time ⚡. With this app, you can turn your images into magic, even with no background in professional digital photography.

In this blog, we will be explaining the technical details behind the app.

The Key Components: Depth Map & Surface Normals

In order to add new light sources and relight the image in a visually appealing way, the illumination needs to be physically (almost) correct. For example, the image below (obtained by our relight app) shows how the image should look when we add a light source on the right. As can be seen, one half of the face is darker, since the other half blocks the light rays coming from the new light source.


One way to achieve the physically correct relighting is to make use of depth maps and surface normals.

What are Depth and Surface Normal Maps?


The depth map is a gray-scale image that encodes how far or near each pixel in the image is. The middle image below shows the depth map predicted by the ClipDrop Depth Estimation Model, where bright and dark pixels represent near and far points, respectively.
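As a small illustration of this convention (the function name and scaling are ours, not the model's actual output format), a raw depth map can be mapped to such a grayscale image like this:

```python
import numpy as np

def depth_to_grayscale(depth):
    """Map a raw depth map to an 8-bit grayscale image where bright
    pixels are near and dark pixels are far."""
    near, far = depth.min(), depth.max()
    # Invert so that small depth values (near points) map to bright pixels.
    norm = 1.0 - (depth - near) / (far - near)
    return (norm * 255).astype(np.uint8)

depth = np.array([[1.0, 5.0], [9.0, 9.0]])   # 1 = nearest, 9 = farthest
gray = depth_to_grayscale(depth)
```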

The surface normal map is an image that encodes the normal vector of each pixel in the R (red), G (green), and B (blue) channels. The surface normals are crucial for determining how the image pixels should be lit. For example, a pixel whose normal vector is perpendicular to the light direction receives no light, whereas a pixel whose normal vector points towards the light source receives the strongest light. More details on normal mapping can be found on the Wikipedia page. Below you can see the input image and the surface normal map predicted by the ClipDrop Surface Normal Estimation Model.


Once highly accurate depth and surface normal maps are predicted, they can be used to compute the illumination with common reflection models such as Lambertian or Phong.
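As a rough sketch of the Lambertian diffuse term, assuming the common convention of encoding normal components from [-1, 1] into 0-255 RGB channels (the helper below is illustrative, not the app's actual code):

```python
import numpy as np

def lambertian_shading(normal_map_rgb, light_dir):
    """Lambertian diffuse term computed from an RGB-encoded normal map.

    normal_map_rgb: H x W x 3 uint8 image; each channel encodes one
                    component of the normal, mapped from [-1, 1] to [0, 255]
    light_dir: vector pointing from the surface toward the light
    """
    # Decode RGB back to normal vectors in [-1, 1] and re-normalize.
    normals = normal_map_rgb.astype(np.float64) / 127.5 - 1.0
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    # N . L, clamped: surfaces facing away from the light receive no light.
    return np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, 1.0)

# A normal pointing straight at the camera, lit from the camera direction.
facing = np.full((1, 1, 3), 128, dtype=np.uint8)
facing[..., 2] = 255                       # normal approximately (0, 0, 1)
shade = lambertian_shading(facing, light_dir=(0.0, 0.0, 1.0))
```

Note how the clamped dot product reproduces the behaviour described above: a normal perpendicular to the light direction yields zero, while a normal aligned with it yields the maximum value.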

Sounds Cool, How to Predict Depth and Surface Normal Maps?


The main challenge here is predicting high quality depth and surface normal maps from a single image, which is referred to as monocular depth & normal estimation in the literature.

A great line of research has recently been conducted to tackle these specific problems. MiDaS and its variants are among the state-of-the-art methods for monocular depth estimation. EPFL-VILAB has also recently introduced monocular depth and normal prediction models trained on their Omnidata dataset.

One way to predict depth & normals is to use these existing methods.

How About Synthetic Data?


While manual annotation for fundamental machine learning / computer vision tasks such as image classification and segmentation is relatively easy, collecting highly accurate annotations for very specific tasks like monocular depth and normal estimation is a big hassle.

Synthetic data is another alternative for training machine learning models, especially when tackling the problems for which collecting annotations is extremely demanding. At ClipDrop, we extensively use synthetic data to solve challenging AI problems.

We have recently built a human dataset containing thousands of diverse human models with a high variety of clothes, poses, body types, facial expressions, facial & body hair, and many more. We also have hundreds of both indoor and outdoor environments with different lighting and weather conditions. With our custom data generation pipeline, we rendered person images as well as their masks and depth & surface normal maps.

These are some examples from our dataset.

Comparisons

In this section, we compare our custom, highly optimized models trained on our synthetic dataset with the current state-of-the-art methods. We compare the models on some images from Unsplash.

Depth Estimation


We compare our monocular depth estimation model with MiDaS. Here are the predictions by these two models:

Original image
MiDaS
ClipDrop AI

Surface Normal Prediction


We compare our custom surface normal estimation model with a model trained on the Omnidata dataset.

Original image
OmniData
ClipDrop AI

Try Our Image Relighting App

Did you enjoy reading this blog? Check out our blog on our image relighting application.

Try our image relighting application. It is free! No account or credit card is needed.

Do not hesitate to share with us any creative visuals you may have. Follow us on Twitter for the latest updates.

If you have any question, feel free to join our Slack Community or contact us.