From AI to responsible AI – the future? 

As artificial intelligence (AI) takes a wrong turn with the misuse of deepfakes and generative AI, among other technologies, experts believe that responsible AI can help course-correct. Responsible AI, they argue, not only fosters trust, transparency and societal well-being but can also streamline operations, enhance decision-making and ensure compliance. The pairing of AI with regulatory structures is important, as the former drives efficiency while the latter safeguards against unintended consequences, biases and possible harm to humanity. “AI can be a consequential technological advancement with the potential to carry high stakes. Any powerful tech can be misused, as evidenced by the recent spate of deepfake crimes. This is the time for AI tool companies to factor in ethics, diversity and inclusivity while designing algorithms and to make sure that the AI platform does not reflect any harmful stereotypes,” Atul Rai, co-founder and CEO of Staqu Technologies, an AI-based startup, told FE-TransformX.

Need to be responsible…

Reportedly, the National Strategy for AI highlighted the need for effective policies and standards to mitigate AI-based risks. The ever-evolving nature of AI requires a robust framework of policies to prevent misuse and bias. An AI-first culture, inherently people-first, is expected to empower human ingenuity and strengthen the relationship between people and technology. With the rise of automated technology, organisations need to maintain a balance between regulation and automation through responsible AI. Respondents at AI high performers are nearly eight times more likely than their peers to say their organisations spend at least 20% of their digital-technology budgets on AI-related technologies, as per McKinsey, a consulting firm.

How much responsibility can AI take?

Industry experts believe that ethical lapses in AI can lead to discrimination and biased decision-making, affecting sectors such as recruitment. As AI becomes more autonomous, determining responsibility for errors can be a challenge. “Responsible AI can help to protect privacy and balance ethical considerations with innovation. Also, interpretability and accountability are important factors to be looked at. It also has the potential to account for human psychology and its limitations. Moreover, responsible AI has the potential to develop and deploy AI systems with a focus on ethics and transparency, among others, within business structures,” Shriranga Mulay, vice president, development engineering, NTT, a technology and business solutions provider, explained.

AI is expected to redefine industries and economies today. In this light, responsible AI can emerge as a crucial aspect, emphasising ethical and accountable use. The adoption of AI in businesses is expected to be an ongoing process, and most sectors expect AI adoption within their business to grow in the coming years. AI can be a critical factor in 49% of IT-related enterprises by 2025, as per insights from Statista. “However, there are also ethical concerns, biases in algorithms and potential job displacement (especially at the lower end), which can be addressed by adapting and improving responsible AI frameworks. There is a need to strike a balance between innovation and ethical considerations. The implications can impact not just technology but the fabric of society,” Aashish Mehta, chief executive officer, nRoad, an enterprise-ready AI platform, said.

Can AI be responsible?

When misused, AI can cause problems such as unfairness, privacy invasion and the worsening of existing inequalities, among others. Using AI responsibly can protect businesses from reputational damage and legal ramifications. To do better with AI, it is important to keep up with evolving ethical standards, regularly update protocols and ensure everyone in the organisation understands what ethical conduct requires. “In a broad sense, AI can represent technological innovation and automation, but responsible AI adds an essential layer of consciousness to the development and deployment of these technologies. It’s about acknowledging that as the power of AI is unlocked, it must be done responsibly,” Animesh Samuel, CEO and co-founder of E42.ai, a no-code Cognitive Process Automation (CPA) platform, highlighted.

Experts believe that the Digital Personal Data Protection Act, 2023, can provide safeguards against individuals’ personal data being processed by AI systems without informed consent. The government is also expected to work on the Digital India Act, which will have provisions to regulate AI intermediaries and high-risk AI systems. Reportedly, the government expects the Digital India Act to be a principle-based framework that can be tailored through rules to address AI and other emerging technologies as they develop. “While such a framework with executive rules for implementation can enable the Government to adapt quickly to address issues as and when they arise, it can also lead to unpredictability and could result in a less accountable legal environment. It is yet to be seen how the Digital India Act will take form and how the Government plans to address these issues,” Probir Roy Chowdhury, partner, JSA Advocates and Solicitors, a law firm, concluded.
