Aakash KT

About Aakash KT

Aakash KT is a researcher at the International Institute of Information Technology, Hyderabad, with an h-index of 2 overall and 2 since 2020. He specializes in Computer Graphics, Computer Vision, Computational Photography, and Generative Models.

His recent articles reflect a diverse array of research interests and contributions to the field:

Accelerating Hair Rendering by Learning High‐Order Scattered Radiance

Combining Resampled Importance and Projected Solid Angle Samplings for Many Area Light Rendering

Analytical & Neural approaches to Physically Based Rendering

Real-Time Rendering of Arbitrary Surface Geometries using Learnt Transfer✱

Bringing linearly transformed cosines to anisotropic GGX

PRTT: Precomputed Radiance Transfer Textures

Transfer Textures for Fast Precomputed Radiance Transfer

Appearance Editing with Free-viewpoint Neural Rendering

Aakash KT Information

University

International Institute of Information Technology, Hyderabad

Position

Ph.D. student @ International Institute of Information Technology, Hyderabad

Citations(all)

16

Citations(since 2020)

16

Cited By

3

hIndex(all)

2

hIndex(since 2020)

2

i10Index(all)

0

i10Index(since 2020)

0

University Profile Page

International Institute of Information Technology, Hyderabad

Aakash KT Skills & Research Interests

Computer Graphics

Computer Vision

Computational Photography

Generative Models

Top articles of Aakash KT

Accelerating Hair Rendering by Learning High‐Order Scattered Radiance

Authors

Aakash KT, Adrian Jarabo, Carlos Aliaga, Matt Jen-Yuan Chiang, Olivier Maury, Christophe Hery, PJ Narayanan, Giljoo Nam

Journal

Computer graphics forum

Published Date

2023/7

Efficiently and accurately rendering hair while accounting for multiple scattering is a challenging open problem. Path tracing in hair takes a long time to converge, while other techniques are either too approximate yet still computationally expensive, or make assumptions about the scene. We present a technique to infer the higher-order scattering in hair in constant time within the path tracing framework, achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online as rendering progresses. We describe how to robustly train this network and thoroughly analyze the resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions, and compare it against a recent …
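
The central idea, supervising a small MLP online with a few fully traced paths while the rest of the paths stop early and query the network for the missing high-order term, can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration only, not the authors' implementation; the feature vector, network size, and the trace_stub function standing in for the path tracer are all assumptions made for the example.

import torch
import torch.nn as nn

class HighOrderRadianceNet(nn.Module):
    # Tiny MLP: per-path features -> predicted high-order RGB radiance (non-negative).
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),
        )

    def forward(self, x):
        return self.net(x)

def trace_stub(n, full=False):
    # Hypothetical stand-in for the path tracer: returns per-path features,
    # the low-order (traced) radiance, and, for "full" paths, a high-order reference.
    feats = torch.rand(n, 8)
    low = torch.rand(n, 3)
    high = 0.1 * torch.rand(n, 3) if full else None
    return feats, low, high

model = HighOrderRadianceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for spp in range(16):  # progressive rendering loop
    # A small fraction of paths is traced to full length to supervise the MLP online.
    feats, _, high_ref = trace_stub(256, full=True)
    loss = nn.functional.mse_loss(model(feats), high_ref)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # The remaining paths terminate early; the MLP fills in the high-order term
    # in constant time per path.
    feats, low, _ = trace_stub(4096)
    with torch.no_grad():
        radiance = low + model(feats)

The fraction of fully traced paths is one natural knob for trading bias against speedup, in the spirit of the control the abstract mentions.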

Combining Resampled Importance and Projected Solid Angle Samplings for Many Area Light Rendering

Authors

Ishaan Nikhil Shah, Aakash KT, PJ Narayanan

Published Date

2023/11/28

Direct lighting from many area light sources is challenging due to variance from both choosing an important light and then a point on it. Resampled Importance Sampling (RIS) achieves low variance in such situations. However, it is limited to simple sampling strategies for its candidates. Specifically for area lights, we can improve the convergence of RIS by incorporating a better sampling strategy: Projected Solid Angle Sampling (ProjLTC). Naively combining RIS and ProjLTC improves equal-sample convergence, but achieves little to no gain at equal time. We identify the core issue behind the high run times and reformulate RIS for better integration with ProjLTC. Our method achieves better convergence and results at both equal sample counts and equal time. We evaluate our method on challenging scenes with varying numbers of area light sources and compare it to uniform sampling, RIS, and ProjLTC. In all cases …
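
Resampled Importance Sampling itself fits in a few lines. Below is a minimal sketch of RIS on a 1D toy integrand, not the paper's renderer or its ProjLTC combination; the uniform proposal and the target density p_hat are assumptions chosen purely for the illustration.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) ** 2          # toy "unshadowed lighting" integrand on [0, 1]
p_hat = f                                     # target density proportional to the integrand

def ris_estimate(m=16):
    x = rng.uniform(0.0, 1.0, m)              # M candidates from the cheap proposal p(x) = 1
    w = p_hat(x) / 1.0                        # resampling weights w_i = p_hat(x_i) / p(x_i)
    y = x[rng.choice(m, p=w / w.sum())]       # resample one candidate with probability ~ w_i
    return (f(y) / p_hat(y)) * (w.sum() / m)  # RIS estimator

est = np.mean([ris_estimate() for _ in range(10_000)])
print(est)   # close to 0.5, the exact value of the toy integral

In the many-light setting, the candidates would typically be light samples and p_hat an inexpensive estimate of their unshadowed contribution; the paper's reformulation concerns making this resampling step combine efficiently with ProjLTC at equal time.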

Analytical & Neural approaches to Physically Based Rendering

Authors

Aakash Kt

Published Date

2023/11/28

Path tracing is ubiquitous for photorealistic rendering of various light transport phenomena. At its core, path tracing involves the stochastic evaluation of complex, recursive integrals, leading to high computational complexity. Research efforts have thus focused on accelerating path tracing either by improving the stochastic sampling process to achieve better convergence, or by using approximate analytical evaluations for a restricted set of these integrals. Another interesting line of work focuses on integrating neural networks within the rendering pipeline, where these networks partially replace stochastic sampling and approximate its converged result. The analytic and neural approaches are attractive from an acceleration point of view. Formulated properly and coupled with advances in hardware, these approaches can achieve much better convergence and eventually lead to real-time performance …
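
The "complex, recursive integrals" referred to here are instances of the rendering equation; its standard statement and the one-sample Monte Carlo estimator used by path tracing (standard notation, not specific to this thesis) are:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i

\langle L_o \rangle = L_e(x, \omega_o) + \frac{f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)}{p(\omega_i)}, \qquad \omega_i \sim p

Path tracing applies this estimator recursively; the analytic approaches evaluate restricted cases of the integral in closed form, while the neural approaches learn an approximation of its converged result.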

Real-Time Rendering of Arbitrary Surface Geometries using Learnt Transfer✱

Authors

Sirikonda Dhawal, Aakash Kt, PJ Narayanan

Published Date

2022/12/8

Precomputed Radiance Transfer (PRT) is widely used for real-time photorealistic effects. PRT disentangles the rendering equation into transfer and lighting, enabling their precomputation. Transfer accounts for the cosine-weighted visibility of points in the scene, while lighting accounts for the radiance emitted by the environment. Prior art stored precomputed transfer in a tabulated manner, either in vertex or texture space. These values are fetched with interpolation at each point for shading. Vertex space methods require densely tessellated mesh vertices for high-quality images. Texture space methods require non-overlapping and area-preserving UV mapping to be available. They also require a high-resolution texture to avoid rendering artifacts. In this paper, we propose a compact transfer representation that is learnt directly on scene geometry points. Specifically, we train a small multi-layer perceptron (MLP) to predict the …
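
A minimal sketch of the underlying PRT shading model helps here: diffuse outgoing radiance is a dot product between a per-point transfer vector and the environment's SH lighting coefficients, and the idea above is to obtain that transfer vector from a small MLP evaluated at the surface point rather than from vertex or texture tables. The network size, input encoding, and SH order below are assumptions for illustration, not the paper's configuration.

import torch
import torch.nn as nn

K = 16                                    # number of SH coefficients (order-4 SH), an assumed choice
lighting_sh = torch.rand(K, 3)            # environment radiance projected to SH (RGB)

# Hypothetical learnt-transfer network: 3D surface point -> K-dimensional transfer vector
transfer_net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, K),
)

points = torch.rand(1024, 3)              # shading points on the scene geometry
transfer = transfer_net(points)           # (1024, K) predicted transfer vectors

# Diffuse PRT shading: per-point radiance = sum_j t_j(x) * l_j  (per colour channel)
color = transfer @ lighting_sh            # (1024, 3)

Because the transfer is queried per point, the representation is independent of mesh tessellation and of any UV mapping, which is the motivation stated in the abstract.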

Bringing linearly transformed cosines to anisotropic GGX

Authors

Aakash KT, Eric Heitz, Jonathan Dupuy, PJ Narayanan

Journal

Proceedings of the ACM on computer graphics and interactive techniques

Published Date

2022/5/4

Linearly Transformed Cosines (LTCs) are a family of distributions that are used for real-time area-light shading thanks to their analytic integration properties. Modern game engines use an LTC approximation of the ubiquitous GGX model, but currently this approximation only exists for isotropic GGX and thus anisotropic GGX is not supported. While the higher dimensionality presents a challenge in itself, we show that several additional problems arise when fitting, post-processing, storing, and interpolating LTCs in the anisotropic case. Each of these operations must be done carefully to avoid rendering artifacts. We find robust solutions for each operation by introducing and exploiting invariance properties of LTCs. As a result, we obtain a small 8⁴ look-up table that provides a plausible and artifact-free LTC approximation to anisotropic GGX and brings it to real-time area-light shading.
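
For context, the standard LTC construction (from Heitz et al., not restated in this abstract) applies a 3×3 linear transform M to a clamped cosine D_o, which is what keeps polygon integrals analytic:

D(\omega) = D_o(\omega_o)\, \frac{\partial \omega_o}{\partial \omega}, \qquad \omega_o = \frac{M^{-1}\omega}{\lVert M^{-1}\omega \rVert}, \qquad \frac{\partial \omega_o}{\partial \omega} = \frac{\lvert M^{-1} \rvert}{\lVert M^{-1}\omega \rVert^{3}}

\int_{P} D(\omega)\, d\omega = \int_{M^{-1}P} D_o(\omega)\, d\omega

The second integral has a closed form for spherical polygons, which enables real-time area-light shading; the fitting, storage, and interpolation issues discussed above concern how M is obtained and tabulated over the anisotropic GGX parameter space.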

PRTT: Precomputed Radiance Transfer Textures

Authors

Sirikonda Dhawal, Aakash KT, PJ Narayanan

Journal

arXiv preprint arXiv:2203.12399

Published Date

2022/3/23

Precomputed Radiance Transfer (PRT) can achieve high quality renders of glossy materials at real-time framerates. PRT involves precomputing a k-dimensional transfer vector of Spherical Harmonic (SH) coefficients at specific points for a scene. Most prior art precomputes transfer at vertices of the mesh and interpolates color for interior points. They require finer mesh tessellations for high quality renderings. In this paper, we explore and present the use of textures for storing transfer. Using transfer textures decouples mesh resolution from transfer storage and sampling which is useful especially for glossy renders. We further demonstrate glossy inter-reflections by precomputing additional textures. We thoroughly discuss practical aspects of transfer textures and analyze their performance in real-time rendering applications. We show equivalent or higher render quality and FPS and demonstrate results on several challenging scenes.

Transfer Textures for Fast Precomputed Radiance Transfer

Authors

Sirikonda Dhawal, KT Aakash, PJ Narayanan

Published Date

2022

Precomputed Radiance Transfer (PRT) can achieve high-quality renders of glossy materials at real-time framerates. PRT involves precomputing a k-dimensional transfer vector or a k×k matrix of Spherical Harmonic (SH) coefficients at specific points of a scene, depending on whether the material is diffuse or glossy respectively. Most prior art precomputes values at vertices of the mesh and interpolates color for interior points, requiring finer mesh tessellations for high-quality renders. In this work, we introduce transfer textures to decouple mesh resolution from transfer storage and sampling, specifically benefiting glossy renders. With transfer textures, the transfer can be densely sampled in the fragment shader while rendering, for both diffuse and glossy materials, even with a low tessellation. This simultaneously provides high render quality and frame rates.
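
The vector-versus-matrix distinction mentioned above corresponds to the standard PRT formulation (general background, not this paper's notation):

L_{\text{diffuse}}(x) \;\approx\; \sum_{j=1}^{k} t_j(x)\, l_j, \qquad t(x) \in \mathbb{R}^{k}

\tilde{l}(x) = T(x)\, l, \qquad T(x) \in \mathbb{R}^{k \times k}

where l holds the SH coefficients of the environment lighting, t(x) is the precomputed diffuse transfer vector, and T(x) is the glossy transfer matrix whose output \tilde{l}(x), the SH coefficients of the transferred incident radiance at x, is then convolved with the BRDF and evaluated in the view-dependent direction. Transfer textures store t(x) or the entries of T(x) per texel so that these quantities can be fetched and evaluated in the fragment shader.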

Appearance Editing with Free-viewpoint Neural Rendering

Authors

Pulkit Gera, Aakash KT, Dhawal Sirikonda, Parikshit Sakurikar, PJ Narayanan

Journal

arXiv preprint arXiv:2110.07674

Published Date

2021/10/14

We present a neural rendering framework for simultaneous view synthesis and appearance editing of a scene from multi-view images captured under known environment illumination. Existing approaches either achieve view synthesis alone or view synthesis along with relighting, without direct control over the scene's appearance. Our approach explicitly disentangles the appearance and learns a lighting representation that is independent of it. Specifically, we independently estimate the BRDF and use it to learn a lighting-only representation of the scene. Such disentanglement allows our approach to generalize to arbitrary changes in appearance while performing view synthesis. We show results of editing the appearance of a real scene, demonstrating that our approach produces plausible appearance editing. The performance of our view synthesis approach is demonstrated to be at par with state-of-the-art approaches on both real and synthetic data.

Fast Analytic Soft Shadows from Area Lights.

Authors

Aakash Kt, Parikshit Sakurikar, PJ Narayanan

Published Date

2021

In this paper, we present the first method to analytically compute shading and soft shadows for physically based BRDFs from arbitrary area lights. We observe that for a given shading point, shadowed radiance can be computed by analytically integrating over the visible portion of the light source using Linearly Transformed Cosines (LTCs). We present a structured approach to project, re-order and horizon-clip spherical polygons of arbitrary lights and occluders. The visible portion is then computed by multiple repetitive set difference operations. Our method produces noise-free shading and soft shadows and outperforms raytracing within the same compute budget. We further optimize our algorithm for convex light and occluder meshes by projecting the silhouette edges as viewed from the shading point to a spherical polygon, and performing one set difference operation, thereby achieving a speedup of more than 2×. We analyze the run-time performance of our method and show rendering results on several scenes with multiple light sources and complex occluders. We demonstrate superior results compared to prior work that uses analytic shading with stochastic shadow computation for area lights.
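
The quantity being computed can be written compactly; the notation below is a hedged restatement, not the paper's own:

L(x, \omega_o) \;=\; \int_{P_L \,\setminus\, \bigcup_j P_{O_j}} f_r(x, \omega, \omega_o)\, L_e(\omega)\, (n \cdot \omega)\, d\omega

where P_L is the light polygon and P_{O_j} the occluder polygons, all projected onto the unit sphere around the shading point x. With an LTC approximation of f_r and constant emitted radiance, the integral over the clipped region reduces to the closed-form irradiance of a spherical polygon, for example Lambert's edge-sum formula (up to winding and orientation conventions):

E(P) \;=\; \frac{1}{2\pi} \sum_{i} \arccos(p_i \cdot p_{i+1}) \left( \frac{p_i \times p_{i+1}}{\lVert p_i \times p_{i+1} \rVert} \right) \cdot n

which is evaluated over the visible region obtained from the set-difference operations described above.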

Neural view synthesis with appearance editing from unstructured images

Authors

Pulkit Gera, Dhawal Sirikonda, PJ Narayanan

Published Date

2021/12/19

We present a neural rendering framework for simultaneous view synthesis and appearance editing of a scene with known environmental illumination captured using a mobile camera. Existing approaches either achieve view synthesis alone or view synthesis along with relighting, without control over the scene's appearance. Our approach explicitly disentangles the appearance and learns a lighting representation that is independent of it. Specifically, we jointly learn the scene appearance and a lighting-only representation of the scene. Such disentanglement allows our approach to generalize to arbitrary changes in appearance while performing view synthesis. We show results of editing the appearance of real scenes in interesting and non-trivial ways. The performance of our view synthesis approach is on par with state-of-the-art approaches on both real and synthetic data.

Aakash KT FAQs

What is Aakash KT's h-index at International Institute of Information Technology, Hyderabad?

The h-index of Aakash KT has been 2 since 2020 and 2 in total.

What are Aakash KT's top articles?

The articles with the following titles

Accelerating Hair Rendering by Learning High‐Order Scattered Radiance

Combining Resampled Importance and Projected Solid Angle Samplings for Many Area Light Rendering

Analytical & Neural approaches to Physically Based Rendering

Real-Time Rendering of Arbitrary Surface Geometries using Learnt Transfer✱

Bringing linearly transformed cosines to anisotropic GGX

PRTT: Precomputed Radiance Transfer Textures

Transfer Textures for Fast Precomputed Radiance Transfer

Appearance Editing with Free-viewpoint Neural Rendering

...

are the top articles of Aakash KT at International Institute of Information Technology, Hyderabad.

What are Aakash KT's research interests?

The research interests of Aakash KT are: Computer Graphics, Computer Vision, Computational Photography, Generative Models

What is Aakash KT's total number of citations?

Aakash KT has 16 citations in total.

Who are the co-authors of Aakash KT?

The co-authors of Aakash KT are P J Narayanan, Parikshit Sakurikar, and Saurabh Saini.

    Co-Authors

    P J Narayanan (H-index: 33)
    International Institute of Information Technology, Hyderabad

    Parikshit Sakurikar (H-index: 8)
    International Institute of Information Technology, Hyderabad

    Saurabh Saini (H-index: 5)
    International Institute of Information Technology, Hyderabad
