CV
Basics
Name | Sravan Ankireddy
Email | sravan.ankireddy@utexas.edu
LinkedIn | https://www.linkedin.com/in/sravan-ankireddy/
Url | https://sravan-ankireddy.github.io/
Research interests
I am interested in problems at the intersection of Deep Learning and Information Theory. My current research focuses on developing ultra-low rate image compression schemes based on vision foundation models.
Work
- 2024.05 - 2024.08 | Apple
  Machine Learning Intern, RF Systems
  - Built tabular generative models for synthetic data generation to model rare scenarios.
- 2023.05 - 2023.08 | Samsung Research America
  AI Research Intern, Standards and Mobility Innovation Lab
  - Designed polar codes via sequence modeling, training Transformers with policy gradient methods.
- 2019.07 - 2021.07 | Qualcomm Research India
  Research Engineer, WiFi Systems Team
  - Developed and deployed multiple transceiver algorithms for next-generation WiFi chipsets.
Education
- 2021.08 - 2025.12 | Austin, TX
- 2014.07 - 2019.05 | Chennai, India
Publications
- 2024 | LightCode: Light Analytical and Neural Codes for Channels with Feedback
  IEEE Journal on Selected Areas in Communications (JSAC)
  Introduces LightCode, a hybrid analytical and neural code for channels with feedback. Achieves performance gains over existing feedback codes.
- 2024 | Nested Construction of Polar Codes via Transformers
  Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT)
  Presents a method to construct nested polar codes using Transformer models. Demonstrates performance improvements over conventional polar codes.
- 2024 | Exploring Explainability in Video Action Recognition
  3rd Explainable AI for Computer Vision (XAI4CV) Workshop at CVPR
  Explores explainability techniques for video action recognition models. Proposes novel visualizations and explanations to understand model decisions.
- 2024 | Task-Aware Distributed Source Coding Under Dynamic Bandwidth
  Advances in Neural Information Processing Systems (NeurIPS)
  Introduces a task-aware approach for distributed source coding in dynamic bandwidth environments. Utilizes deep learning to optimize bandwidth usage and improve coding efficiency.
- 2024 | DeepPolar: Inventing Nonlinear Large-Kernel Polar Codes via Deep Learning
  Proceedings of the International Conference on Machine Learning (ICML)
  Proposes DeepPolar, a method to invent nonlinear large-kernel polar codes using deep learning techniques. This approach outperforms traditional polar codes in error-correcting performance.
- 2023 | Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach
  arXiv preprint
  Develops a reinforcement learning approach for joint beamforming without exploration, using batch-constrained off-policy methods to optimize performance.
- 2023 | Interpreting Neural Min-Sum Decoders
  Proceedings of the 2023 IEEE International Conference on Communications (ICC)
  Provides insights into neural min-sum decoders by analyzing their decision-making process. Proposes a method to interpret these models for better transparency.
- 2023 | Compressed Error HARQ: Feedback Communication on Noise-Asymmetric Channels
  Proceedings of the 2023 IEEE International Symposium on Information Theory (ISIT)
  Proposes a compressed HARQ feedback mechanism for noise-asymmetric channels. Achieves better reliability and lower overhead than traditional HARQ.
- 2022 | TinyTurbo: Efficient Turbo Decoders on Edge
  Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT)
  Proposes TinyTurbo, an efficient turbo decoding method designed for edge devices. Demonstrates low latency and reduced resource usage.
Research projects
- 2024.08 - Present
Ultra Low-Rate Neural Image Compression
- Developed an ultra-low rate (<0.1 bpp) compression framework leveraging vision foundation models.
- Designed a novel cross-attention aggregation technique to improve the alignment between input image and reconstructed image using textual captions as side information.
- Improving realism-fidelity trade-off in reconstruction using RLHF guidance and preference datasets.
- 2024.05 - 2024.08
Synthetic Datasets using Tabular Generative Models
- Developed and trained a diffusion-based foundation model for RF calibration, focusing on synthetic data creation to model failures at the receiver.
- Reduced data collection needs by over 10× through the use of high-quality synthetic tabular datasets.
- Improved regression performance by training on synthetic data, achieving ∼22% reduction in MSE.
- 2024.01 - Present
Improving In-Context Learning (ICL) in LLMs using Structured Noise
- Developed techniques to enhance ICL performance by improving the separation of demonstrations.
- Proposed a method to select the optimal separator by analyzing perplexity for each demonstration.
- Formulated an explanation for the empirical observations using Bayesian inference.
- 2023.01 - 2023.05
Data Augmentation using Generative Models
- Explored parameter-efficient fine-tuning methods for text-to-image models for data augmentation.
- Demonstrated gains of up to 3.4% in classification accuracy by augmenting real datasets with synthetic images generated via Low-Rank Adaptation (LoRA) fine-tuning with DreamBooth.
- 2023.05 - 2024.01
Construction of Polar Codes using Sequence Modeling
- Modeled polar code construction as a sequential decision-making problem and designed a nested construction technique using Transformer models and policy gradient methods.
- Demonstrated significant gains (up to 0.8 dB) over the patented polar code construction in the 5G-NR standard.
- 2022.07 - 2023.09
Task-Aware Variable Rate Compression of Distributed Sources
- Designed distributed representation learning algorithm to optimize compression for downstream tasks.
- Proposed a dimensionality reduction technique to encourage low-rank representations, allowing variable-rate compression using a single model.
Skills
Deep Learning | Foundation Models, LLMs, PEFT, Diffusion Models, Transformers, Computer Vision, Representation Learning
Coding | Python, PyTorch, C++, MATLAB