Do you want to offer more colorful images to your clients and let your creativity run free without getting caught up in technical constraints? No need to invest in expensive and cumbersome equipment if you are a beginner in photography and do not necessarily have a large studio. No training to pay either. Relight is a very easy-to-use application, I promise!
Another advantage is that you don't have to keep your client waiting for an hour while you set up and try to find the best color (especially if she's pregnant, like most of my clients!).
I will show you the picture I used, then what it looks like when you add a colored light. You can change the color and intensity of the light as well.
Lighting is of paramount importance when creating a photograph. It not only makes an image darker or brighter, but also drastically changes the mood and emotion of the image. The position, color, and intensity of the light sources have a significant impact on how the subject looks. For instance, in a portrait, mixing multiple light sources of different colors can make the image more vibrant, whereas the image might look dull and unpleasant if the lighting conditions are bad.
Ambient light illuminates every pixel equally. This light source is particularly useful for making the image darker (or brighter). Its color can be changed as well.
To modify the ambient light, click on the Ambient button (right below the image) in the editor. To make the image brighter or darker, move the slider to the right or to the left, respectively. You can change the ambient light color in real time by clicking on the color picker.
See below how drastically you can change the image just by playing with a slider and a color picker 🙂
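Under the hood, an ambient term conceptually just scales every pixel by the same color and intensity. Here is a minimal NumPy sketch of that idea (function and variable names are illustrative, not the app's actual code):

```python
import numpy as np

def apply_ambient(image, color=(1.0, 1.0, 1.0), intensity=1.0):
    """Scale every pixel by the same ambient color and intensity.

    image: float array of shape (H, W, 3) with values in [0, 1].
    color: RGB multiplier of the ambient light.
    intensity: global brightness factor (1.0 with white light leaves
               the image unchanged; lower values darken it).
    """
    ambient = intensity * np.asarray(color, dtype=np.float32)
    return np.clip(image * ambient, 0.0, 1.0)

# A mid-gray image under dimmed, warm ambient light gets darker and warmer.
img = np.full((2, 2, 3), 0.5, dtype=np.float32)
out = apply_ambient(img, color=(1.0, 0.8, 0.6), intensity=0.5)
```

Because the same factor is applied everywhere, this is exactly what the Ambient slider and color picker do conceptually: one global multiply per channel.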
Once placed over the image, a lighting bulb illuminates the pixels around it. Its intensity decreases as you move away from the light source: the closest pixels receive the most light, and the furthest pixels the least.
To add a new bulb, simply click on the + New Light button. To remove it, click on the trash bin icon. The eye icon at the top right hides or displays the lighting bulbs. The lighting-bulb icon next to the eye icon lets you toggle between the original and the relit images.
Each lighting bulb has four main parameters: color, intensity, distance, and radius. You can easily change the color by playing with the color picker. You can make the light source more or less intense by moving the slider next to the color picker.
The bottom-right slider determines the light radius. A bulb with a larger radius illuminates a larger area, whereas one with a smaller radius illuminates a smaller one.
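Putting intensity and radius together, the contribution of a single bulb can be sketched as a falloff around its position. The following is an illustrative 2D approximation only (the app actually computes lighting in 3D); the linear falloff and all names are assumptions:

```python
import numpy as np

def point_light_mask(h, w, cx, cy, radius, intensity=1.0):
    """Per-pixel contribution of a light bulb centered at (cx, cy).

    Pixels at the center get the full intensity; the contribution fades
    linearly to zero at `radius` pixels away. Real engines typically use
    an inverse-square law; linear falloff keeps the sketch simple.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return intensity * np.clip(1.0 - dist / radius, 0.0, 1.0)

# A 5x5 mask: brightest under the bulb, fading towards the corners.
mask = point_light_mask(5, 5, cx=2, cy=2, radius=3.0)
```

Increasing `radius` widens the illuminated area, and `intensity` scales the whole mask, matching the two sliders described above.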
To make the illumination physically correct, we perform instant 3D reconstruction and allow the lighting bulbs to move in 3D space. You can move each lighting bulb to the left, to the right, towards the image, and away from it.
The app enlarges the circle around the bulb as you move it away from the image.
If you move the bulb towards the image, it might go inside the model. In this case, we make the part of the circle that is inside the model transparent. If the bulb is completely inside the model, the app makes it fully transparent.
A mix of two lights (say, one on the left and another on the right) often results in cool effects. Another favourite trick of mine is reducing the ambient light, placing a single light bulb over the area I want to illuminate, and reducing its radius.
Keeping / Removing the Background
You can keep or remove the background as well. If you opt for removing it, this is done automatically by ClipDrop Background Remover. You can also choose whether the background receives light. If you remove the background, you can choose the color that replaces it.
Do not hesitate to relight your portrait with our app using different light and background colors.
Apply Studio Lights Only to the Background
One way to achieve this is to drop the distance of each light source close to zero, which moves the light sources towards the image.
You can get the two cool photos in the middle by placing a light bulb (or bulbs) very close to the image. Just set its distance to the minimum (or close to the minimum value).
We have recently introduced 💡 our image relighting AI application 💡, which lets you apply professional lights to your images 📸 in real time ⚡. With this app, you can work magic on your images even with no background in professional digital photography.
In this blog, we will be explaining the technical details behind the app.
In order to add new light sources and relight the image in a visually appealing way, the illumination needs to be physically (almost) correct. For example, the image below (obtained by our relight app) shows how the image should look when we add a light source on the right. As can be seen, one half of the face is darker, since the other half blocks the light rays coming from the new light source.
One way to achieve the physically correct relighting is to make use of depth maps and surface normals.
What are Depth and Surface Normal Maps?
The depth map is a gray-scale image that encodes how near or far each pixel in the image is. The middle image below shows the depth map predicted by the ClipDrop Depth Estimation Model, where bright and dark pixels respectively represent near and far points in the image.
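Given a depth map and camera intrinsics, each pixel can be lifted to a 3D point, which is what makes physically based lighting possible in the first place. A minimal sketch, assuming a pinhole camera model (the intrinsics and names here are illustrative, not the app's actual values):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth map to a per-pixel point cloud in camera space.

    depth: (H, W) array of per-pixel depth values.
    fx, fy: assumed focal lengths in pixels; cx, cy: principal point.
    Returns an (H, W, 3) array of 3D points.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    x = (xs - cx) / fx * depth
    y = (ys - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

# A flat plane at depth 1: the pixel at the principal point maps to (0, 0, 1).
points = unproject_depth(np.ones((4, 4), dtype=np.float32),
                         fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

These 3D positions are what a movable bulb's distance is measured against.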
The surface normal map is an image that encodes the normal vector of each pixel in the R (red), G (green), and B (blue) channels. The surface normals are crucial for determining how the image pixels should be lit. For example, a pixel whose normal is perpendicular to the direction of the light receives no light, whereas a pixel whose normal points straight at the light source receives the strongest light. More details on normal mapping can be found on the Wikipedia page. Below you can see the input image and the surface normal map predicted by the ClipDrop Surface Normal Estimation Model.
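By the usual normal-mapping convention, an RGB normal map stores each component of the unit normal remapped from [-1, 1] to [0, 1]. Decoding it back can look like this (the exact encoding used by the model is an assumption on my part):

```python
import numpy as np

def decode_normals(normal_map):
    """Map an RGB normal map back to unit normal vectors.

    normal_map: (H, W, 3) array with channel values in [0, 1].
    Each channel is remapped from [0, 1] to [-1, 1], then the vectors
    are renormalized to unit length.
    """
    n = normal_map.astype(np.float32) * 2.0 - 1.0
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n

# A pixel of value (0.5, 0.5, 1.0) decodes to a normal facing the camera.
n = decode_normals(np.array([[[0.5, 0.5, 1.0]]]))
```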
Once highly accurate depth and surface normal maps are predicted, they can be used to compute the illumination by common reflection models such as Lambertian or Phong.
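As a concrete example of the Lambertian model: the diffuse contribution of a point light is max(0, n · l), scaled by the light's color and intensity, where n is the surface normal and l is the unit direction from the pixel's 3D position to the light. A sketch under those conventions (names and coordinate choices are illustrative):

```python
import numpy as np

def lambertian(normals, points, light_pos, light_color, intensity=1.0):
    """Diffuse (Lambertian) contribution of a point light.

    normals:     (H, W, 3) unit surface normals.
    points:      (H, W, 3) per-pixel 3D positions (e.g. from the depth map).
    light_pos:   (3,) position of the light bulb.
    light_color: (3,) RGB color of the light.
    Returns the (H, W, 3) diffuse term max(0, n . l) * color * intensity.
    """
    to_light = np.asarray(light_pos, dtype=np.float32) - points
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
    ndotl = np.clip(np.sum(normals * to_light, axis=-1, keepdims=True),
                    0.0, None)
    return ndotl * intensity * np.asarray(light_color, dtype=np.float32)

# A camera-facing surface lit head-on receives the full light color.
normals = np.array([[[0.0, 0.0, -1.0]]], dtype=np.float32)
points = np.array([[[0.0, 0.0, 1.0]]], dtype=np.float32)
shade = lambertian(normals, points, light_pos=(0.0, 0.0, 0.0),
                   light_color=(1.0, 0.9, 0.8))
```

Phong adds a view-dependent specular term on top of this diffuse one; the clamp to zero is what makes surfaces facing away from a bulb go dark, as in the portrait example above.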
Sounds Cool, How to Predict Depth and Surface Normal Maps?
The main challenge here is predicting high-quality depth and surface normal maps from a single image, which is referred to as monocular depth & normal estimation in the literature.
A great line of research has recently been conducted to tackle these specific problems. MiDaS and its variants are among the state-of-the-art methods for monocular depth estimation. EPFL-VILAB has also recently introduced monocular depth and normal prediction models trained on their Omnidata dataset.
One option for depth & normal prediction is to use these existing methods off the shelf.
How About Synthetic Data?
While manual annotation for certain fundamental machine learning / computer vision tasks such as image classification, segmentation, etc. is relatively easy, collecting highly accurate annotations for very specific tasks like monocular depth and normal estimation is a big hassle.
Synthetic data is another alternative for training machine learning models, especially when tackling the problems for which collecting annotations is extremely demanding. At ClipDrop, we extensively use synthetic data to solve challenging AI problems.
We have recently built a human dataset containing thousands of diverse human models with a wide variety of clothes, poses, body types, facial expressions, facial & body hair, and more. We also have hundreds of indoor and outdoor environments with different lighting and weather conditions. With our custom data generation pipeline, we rendered person images along with their masks and depth & surface normal maps.