While many people are losing faith in AI, young Gen Z scientists are embracing its potential and seeking to use it positively. Achyuta Rajaram, 17, of Exeter, New Hampshire, recently won the $250,000 top prize in the Regeneron Science Talent Search for his research on ethical AI.
Rajaram’s research focuses on understanding AI algorithms and their decision-making processes. By examining the inner workings of these algorithms, his work strengthens the ethical framework around AI and represents an important step toward ensuring that AI is fair and secure.
Innovation & Tech Today spoke with Rajaram about the inspiration behind his research, the role young scientists can play in helping society embrace AI, and more.
Innovation & Tech Today: Congratulations on winning first place in the 2024 Regeneron Science Talent Search! Can you tell us how you started researching AI algorithms and their decision-making processes?

Achyuta Rajaram: I’ve always been interested in studying the nature of intelligence and treating it as an engineering problem. However, directly studying the human brain is extremely difficult; the scalpel of modern biology is more like a sledgehammer. Minimally invasive interventions on the human brain are impossible because its intertwined systems are so complex.
Given my background in computer science, neural networks felt like a natural place to study the algorithms behind intelligence. We can perform “surgery” on neural networks and manually edit them in ways that are not possible in biological organisms.
This gives us some general insight into the nature of intelligence. One example I like to cite comes from interpretability research: it turns out that deep learning models implement Gabor filters, the same visual “building blocks” humans use for simple, low-level tasks such as edge detection.
So I was mainly inspired by the possibility of “reverse engineering intelligence” by observing neural networks. Beyond that, I was excited about the practical applications of understanding what is going on “inside” these large, complex models, which lets us build systems that are safer, more efficient, and more robust in the real world.
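To make the Gabor-filter observation concrete: the first convolutional layer of a pretrained vision model typically learns oriented edge detectors that look strikingly like Gabor filters. The sketch below is our illustration, not code from Rajaram’s work; it assumes PyTorch, torchvision, and matplotlib are installed.

```python
# Visualize the first-layer convolutional filters of a pretrained ResNet-18.
# Many of the learned filters resemble the Gabor filters found in early
# biological vision (oriented edges, color-opponent blobs).
import matplotlib.pyplot as plt
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
filters = model.conv1.weight.detach()          # shape: (64, 3, 7, 7)

# Normalize the filters to [0, 1] so they can be displayed as RGB images.
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, filt in zip(axes.flat, filters):
    ax.imshow(filt.permute(1, 2, 0).numpy())   # (H, W, C) for imshow
    ax.axis("off")
plt.suptitle("First-layer filters of ResNet-18 (many are Gabor-like)")
plt.show()
```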
I&T Today: How do you think your research will contribute to improving the ethics of AI and making algorithms fairer, safer, and more effective?

Rajaram: I think that interpretability research, including the work I’ve conducted, shows great potential for making algorithms safer, more effective, and more robust. Let’s take a closer look at each.
Efficiency: Neural networks are incredibly expensive to run. State-of-the-art methods often require enormous scale, both in the billion-item datasets used to train them and in the supercomputers needed to run them. With a deeper understanding of a model’s internals, we can remove unnecessary components and save computational resources while maintaining overall functionality. My research is particularly applicable to finding redundant “circuits” within larger models. This method of “pruning” model components to increase efficiency promises to democratize access to powerful neural-network-based systems.
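As a simple illustration of the pruning idea, the sketch below uses PyTorch’s built-in magnitude pruning. This is a standard, simpler stand-in for the circuit-level analysis Rajaram describes, not his method; the toy architecture and 50% ratio are arbitrary assumptions.

```python
# Magnitude-based pruning: zero out the smallest weights of each Linear
# layer, trading a little accuracy for a sparser, cheaper model.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune 50% of the weights (by absolute magnitude) in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # bake the mask into the weights

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Sparsity of first layer: {sparsity:.0%}")   # ~50%
```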
Safety: As your question indicates, there are many ways that AI, or machine learning systems more broadly, can cause harm, from racial and gender bias in training data to future LLMs that could help create biological weapons. Given these very real risks, I believe a complete mechanistic understanding of a model’s behavior is the only way to ensure a system is safe for use in the real world. As long as neural networks remain “black boxes,” strong guarantees about the fairness and security of these algorithms cannot be obtained. I hope my research will help identify and remove unsafe or biased components within large-scale computer vision systems.
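One common way interpretability researchers test whether a specific component drives unwanted behavior is ablation: zero out its activations and see how the model’s predictions change. The sketch below is hypothetical; the layer and channel index are placeholders, not findings from the research.

```python
# Ablate one channel of an intermediate layer with a forward hook and
# compare predictions with and without the intervention.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

CHANNEL = 42  # hypothetical channel suspected of encoding an unwanted feature

def ablate_channel(module, inputs, output):
    output = output.clone()
    output[:, CHANNEL] = 0.0   # zero out this channel's activations
    return output              # returned value replaces the layer's output

handle = model.layer3.register_forward_hook(ablate_channel)

x = torch.randn(1, 3, 224, 224)       # stand-in for a real image batch
with torch.no_grad():
    ablated_logits = model(x)
handle.remove()                       # restore the original behavior
with torch.no_grad():
    original_logits = model(x)

changed = (original_logits.argmax() != ablated_logits.argmax()).item()
print("Top-1 prediction changed:", changed)
```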
Robustness: There are many examples of vision models making mistakes and failing to generalize in practice. One example I studied in my work is an adversarial text attack on CLIP, a widely used open-source model. Specifically, the model has an interesting failure mode: it misclassifies a traffic light when there is a sign next to it showing the opposite color. We were able to find and remove the component in the model that caused this issue, making it more robust against the attack. I’m excited about applying the science of interpretability to understanding the root causes of these failures in real life.
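The failure mode Rajaram describes is easy to set up in outline with zero-shot CLIP classification. The sketch below uses Hugging Face’s CLIP interface, which may differ from the exact setup in his research; the image filename is a hypothetical placeholder for a photo of a green traffic light next to a sign reading “red.”

```python
# Zero-shot CLIP classification, the setting where typographic attacks
# (misleading text in the image) can override the visual evidence.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical image: a green traffic light with a sign reading "red".
image = Image.open("green_light_with_red_sign.jpg")
labels = ["a photo of a green traffic light", "a photo of a red traffic light"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # image-text similarity scores
probs = logits.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{p.item():.1%}  {label}")   # the written word can dominate the pixels
```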
I&T Today: Given that distrust of AI is growing, especially among the general public, how do you think young scientists like you can bridge this gap and foster trust in AI technology?
Rajaram: First, I think it’s important to see “AI” not as a monolith, but as a collection of different technologies, each with its own capabilities and risks. Given that, I believe trust in AI technology should increase significantly across the board as a deeper understanding of the mechanisms of model behavior lets us establish performance guarantees and fully characterize the potential failure modes of deployed models. My duties as a young scientist are twofold: first, to undertake research that alleviates these concerns, and second, to communicate the results to the public to bridge the gap. I believe the only way to build trust is to educate the public about this technology and its complexities.
I&T Today: How do you plan to continue your research journey and pursue your interests in AI and ethics?
Rajaram: I plan to continue studying computer science and machine learning at MIT, and to continue my work in MIT CSAIL’s Torralba Lab building scalable systems that automatically interpret large models.
I&T Today: Do you have any advice for other young scientists who want to make a positive impact through their research, especially in fields like AI and technology?

Rajaram: My main advice is this: first, be brave! Research is difficult, especially when you’re working on really important problems. Second, I believe the people you work with are just as important as the problems you work on. I’m extremely grateful to my research supervisor, Dr. Sarah Schwettmann, for supporting me as a person and as a scientist throughout this project. Finally, I think it’s important to understand the state of research across a wide range of subfields, especially in rapidly evolving areas like AI. Insight can come from anywhere, and as long as you keep learning, you can do anything.