Divyashree Tummalapalli
Assisted chip-design in the era of Large Language Models
Abstract
Over the years, the chip-design process has seen a surge of AI-powered applications that optimize design workflows, reduce time-to-market, and streamline engineering effort. Concurrently, the remarkable advances of large language models (LLMs) on natural language processing tasks have inspired researchers to explore their potential in adjacent domains that share structural similarities with natural language, such as code. The impressive performance of LLMs on software-development tasks, including code generation, review, and debugging, has naturally led to investigations of their applicability to hardware development as well. In this talk, we seek to shed light on the transformative potential of large language models in streamlining and enhancing the complex chip-design process, ultimately contributing to continued advances in hardware development.
Bio
Divyashree is an AI Research Scientist at Intel with over 8 years of experience exploring and developing innovative AI solutions for accelerating chip-design flows. Her core interest areas include graph deep learning, generative AI, and classical machine learning algorithms. She has presented her work at forums such as IEEE ICECS, WiML-NeurIPS, and ICON. Her current work applies AI and large language models to hardware development, enhancing design efficiency and accelerating the end-to-end development process. She completed her M.Tech. at IIIT Bangalore in 2016.
Divyashree Tummalapalli
Intel