Thursday, 30 January 2025

Minorities troubled by the Uttarakhand government's move!

 




India's minority communities, including more than 200 million Muslims, more than 26 million Christians, and roughly 101 million indigenous peoples, follow their own civil laws, shaped by their religious texts and cultural norms.

Selvaraj Susaimanickam - Vatican

The news agency UCA News reports that Christian and Muslim leaders have expressed dismay after the north Indian state of Uttarakhand replaced religion-based personal laws and adopted a Uniform Civil Code.

The agency also reports that Uttarakhand Chief Minister Pushkar Singh Dhami, speaking to the media on Monday, January 27, said that the Uniform Civil Code (UCC) would ensure equal rights for all citizens regardless of their religious background.

Commenting on the state government's move to the news agency, A.C. Michael, president of the Federation of Catholic Associations of the Archdiocese of Delhi, said that a majoritarian, anti-minority law cannot be accepted as just, and that it is one-sided.

Michael further pointed out that a law which violates the religious and cultural symbols of minorities cannot be called equal, and insisted that the courts must scrutinize this law imposed by the Uttarakhand state government and that it should be repealed soon.

Meanwhile, Mohammed Arif, head of the Centre for Harmony and Peace, said that India is a diverse country in which every community follows its own traditions and customs, and that it would be difficult to bring everyone under one umbrella through this law.


Wednesday, 29 January 2025

A Review of the Vatican Document - Antiqua et Nova. Edited by: Rev. Robert John Kennedy


The Vatican document "Antiqua et Nova" offers a comprehensive framework for understanding and navigating the ethical implications of Artificial Intelligence. Here are some key aspects and features:

 * Human-Centered AI:

   * Focus on Human Dignity: The document strongly emphasizes that AI should serve humanity and enhance human flourishing. It warns against AI systems that dehumanize, exploit, or control individuals.

     * Reference: "AI should be used only as a tool to complement human intelligence rather than replace its richness." (Antiqua et Nova, 112)

   * Prioritizing Human Values: The document stresses the importance of aligning AI development with human values, such as justice, solidarity, and respect for human dignity.

 * Ethical Considerations:

   * Accountability: The document emphasizes the importance of accountability for the development, deployment, and use of AI systems. Developers, users, and policymakers are all responsible for ensuring ethical and responsible AI.

   * Transparency and Explainability: The document calls for greater transparency and explainability in AI systems, particularly in critical areas like healthcare and justice.

   * Addressing Bias: The document highlights the risks of bias in AI systems and emphasizes the need to mitigate these biases to ensure fairness and equity.

 * Social and Economic Impact:

   * Addressing Inequality: The document warns against the potential for AI to exacerbate existing inequalities, such as access to healthcare, education, and employment opportunities.

   * Impact on Work: The document acknowledges the potential for job displacement due to AI automation while emphasizing the need to prepare the workforce for the changing job market.

   * Sustainable Development: The document highlights the environmental impact of AI, particularly the energy consumption associated with AI systems, and emphasizes the need for sustainable AI development.

 * AI and Human Relationships:

   * Protecting Human Relationships: The document warns against the potential for AI to dehumanize human interactions and erode social connections. It emphasizes the importance of preserving genuine human relationships and avoiding the pitfalls of AI-driven isolation.

   * The Dangers of Anthropomorphism: The document cautions against anthropomorphizing AI, treating it as a human-like entity. It stresses that AI is a tool and should not be treated as a person or a substitute for human companionship.

 * AI and Warfare:

   * Banning Autonomous Weapons: The document strongly condemns the development and use of autonomous weapons systems that can select and engage targets without human intervention.

     * Reference: "This danger demands serious attention, reflecting the long-standing concern about technologies that grant war ‘an uncontrollable destructive power over great numbers of innocent civilians,’ without even sparing children." (Antiqua et Nova, 101)

Key Features:

 * Human-centered approach: Prioritizes human dignity, values, and well-being.

 * Emphasis on ethical considerations: Focuses on accountability, transparency, and fairness.

 * Holistic perspective: Addresses the social, economic, and environmental impacts of AI.

 * Call for responsible development: Emphasizes the need for careful planning, oversight, and regulation of AI technologies.

 * Focus on human relationships: Warns against the dehumanizing effects of AI and emphasizes the importance of genuine human connection.

This is not an exhaustive list, but it provides a general overview of the key aspects and features of the Vatican document on AI. The document offers a valuable framework for ethical AI development and encourages a thoughtful and responsible approach to this transformative technology.

Bishop Tighe: ‘Antiqua et Nova’ offers guidance on ethical development of AI

 




As the Holy See releases a document on artificial intelligence, the Secretary of the Dicastery for Culture and Education tells Vatican News about AI’s extraordinary potential and the need for humanity to guide its development with collective responsibility, so that it may be a blessing for all people.

By Devin Watkins

The Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education released a document on Tuesday, January 28, entitled "Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence."

The document seeks to offer guidance for Catholic institutions and humanity as a whole regarding the ethical development and use of AI, according to the Secretary of the Dicastery for Culture and Education.

Speaking to Vatican News, Bishop Paul Tighe said Antiqua et Nova is not the final word on AI but rather hopes to contribute to the debate by providing points for consideration.

“There is a broader understanding of intelligence, which is about our human capacity to find purpose and meaning in life,” he said. “And that is a form of intelligence, which machines can't really replace.”


Here is the full transcript of the interview with Bishop Tighe:

Q: The Holy See has just released a document entitled “Antiqua et nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence.” What would you say is ‘new’ in this document and what does it hope to tell the world, especially the Church?

This document is bringing together a lot of reflections that have been developing organically over the last number of years. AI has been on the agenda for about ten years. It's been around for longer, and it's been discussed for longer, but it's hit the public consciousness over the last ten years and very particularly in the last year or so with the emergence of ChatGPT, which put AI tools in the hands of ordinary users.

What we're trying to do at the moment is to bring together the reflections that have been emerging from the Church, from various Church organizations. Here at the Vatican, we have messages for the World Day of Peace and a Message for the World Day of Communications. The Pontifical Academy for Life has been working on this issue with the Rome Call, the Pontifical Academies for the Sciences have been convening scientists to talk about AI, and we've been dealing with the question of education and AI. There's an idea to bring it together and bring something synthetic that unites all the different perspectives that have been emerging organically, and maybe put them in one place.

It's also not the final word; that's the first thing to be said very clearly. This is something that we're going to be living with, that's going to be emerging. But what it is trying to do is to offer people some perspectives from which they can begin to think critically about AI and its potential benefits for society, and then to alert people somewhat to what we need to think about to ensure that we don't inadvertently create something, or allow something to be created, that could be damaging to humanity and to society.

I would say there's a certain cautionary element here. Many of us at the beginning of social media were very quick to embrace its extraordinary potential. We didn't necessarily see the side effects that emerged in terms of polarization, fake news, and other issues.

We want to welcome something that has great potential for human beings. We want to see that potential, and at the same time be attentive to the possible downsides. I think that's what we're trying to do here. One day you read headlines in the newspapers that AI is going to be the salvation of us all. The next day we're reading that it's going to be the annihilation and the end of the world.

We're trying to offer people a more balanced approach. The document focuses on a number of things. There are the headline issues that everybody has thought about: issues about the future of work, about war, about deep fakes, about inequality. And there are ethical issues and societal issues that we want to look at.

But in addressing those, we're also trying to focus on a more basic question about what it means to be human, the anthropological issue of what it means to be human. What is it that gives human life value, purpose, and meaning? We recognize that AI systems can enhance and augment certain parts of our humanity, that is, our ability to reason, to process, to discern, to discover, to see patterns, to make innovations. It can certainly enhance that.

We also want to say that that type of intelligence is not the only type of intelligence. There is a broader understanding of intelligence, which is about our human capacity to find purpose and meaning in life. It's interesting that many of the people working in AI are very clear that they want to put AI at the service of human good, that they want to have person-centered AI; they want AI for humanity. All these titles are there.

Part of the question we have to ask is: what is it that is good for humanity? What is it that promotes human well-being? And that is a form of intelligence, which machines can't really replace. We have to understand that in the Catholic tradition, which is rooted in our own philosophical traditions, not just in Catholicism, our understanding of intelligence is more than simply reasoning, calculation, and processing, but includes also that capacity to look for purpose, meaning, and direction in our lives.

The document tries to open up that wider understanding of intelligence in terms of a number of categories. One, it says, is going beyond pure rationality and moving on to issues, like the fact that a lot of the way we grow as human beings is in dialogue and debate with others. Relationality becomes a key part of what it is to have human intelligence: our ability to learn from others. It's also about embodiment. We're learning more and more that our minds are not separate from our bodies. They are not something that can simply be uplifted and put onto a computer. They're organic. We learn through doing. We learn through our emotions. We learn through our intuitions.

These are important for the human wisdom that grows out of all of that. Calculation is a part of that, but it's not the whole story. And finally, I think what we're concerned with always is searching for ultimate truths, for what is it that gives shape, purpose, and meaning in life. That's something that we may be able to use AI to assist us with certain elements, but in the ultimate analysis, that's a type of intellectual commitment that goes beyond something that can be done simply by a machine.

Q: AI development is evolving at a rapid pace. Why has the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education decided to release this document at this moment?

The Vatican has been attentive to this, and not just the Holy See but the Catholic Church more broadly; many Catholic universities have been leading reflection on AI and its importance. But if we're honest, it's the increased public attention to AI over the last year and a half, with the advent of ChatGPT and other easily-used artificial intelligence systems, that has given an urgency to it.

Certainly from our perspective, within the world of education, all educators are asking questions about the potential for AI to help in education and the risks if it somehow de-personalizes the nature of education. We’re also responding to questions put to us during ad Limina visits, since the bishops want some orientation.

This document comes about and draws together lots of other initiatives and puts them together. It also gives it a unity of vision, which tries to unite the ethical issues and relate them to that more fundamental anthropological vision of what it is that makes us human.

It was interesting that the United Nations has been trying to work on overall systems for the governance of AI. One of the things that emerged there was that at one stage they said: 'These are obviously questions that raise questions about the future of humanity, but really, we can't address that, because there are too many different views about it.'

Also, UNESCO said that AI, and this was the point that struck me very strongly, is leading to what they call an anthropological disruption. Silicon Valley loves the language of disruption, of breaking down in order to reinvent. But here we are talking about the nature of what it is to be human and what it is that makes human life satisfactory. It becomes very important that we reflect critically on that, and that we don't bypass the question about the ultimate meaning of life.

That's where I think the issues that emerge strongly in this document are ones about the risk of increased inequality with AI. You can see this, generally, in terms of what has happened with digitalization, which has led to the rise of a very small number of extraordinarily wealthy people who have extraordinary amounts of power, for which they are not necessarily accountable to other institutions. So, how do we think about making sure that this doesn't serve to fracture the unity of the human family, which is economic unity but is also access to power and information?

One of the areas where there is extraordinary potential for AI is in the area of healthcare. But we know that healthcare tends to be already not very fairly distributed. Will this lead to further inequalities in that area? A lot of our reflection and the timing of this is that we need to have something there to address the debate.

This is not the final word. It can't be the final word, because this is an emerging area. But it's also trying to make sure that we're putting down some markers, some points from which people who are interested in engaging with the debate may be able to grasp and work with.

It is written for the Church and for Catholic institutions, but it is also offered to all people, to say that this is something that is going to have a huge impact on the future of humanity. Let's think about it; let's add our voices to it. And let's not feel that, because it's technologically quite complicated, we should somehow hand over competence for the bigger questions, which are about our future as human beings.

Q: In the document overall, there seems to be a recognition of AI's potential, accompanied by an undertone of caution about its misuse. Isaac Asimov’s Robot series of novels comes to mind when thinking about humanity’s ultimate relationship with AI. Would you say that the document takes a more embracing or a more cautionary view on AI?

I hope it takes a middle ground, not embracing any of the apocalyptic visions. Neither is it trying to imagine that this is going, of itself, to resolve all human problems. It's trying to see the potential and celebrate the extraordinary achievement that AI is. It's a reflection on humanity's capacity to learn, to innovate, to develop, which is a God-given capacity.

We want to celebrate that. But at the same time, it's saying: we know from past experience that so many wonderful innovations with great potential also became problematic, for a number of reasons. Problematic because maybe there were inherent flaws within the systems themselves. Problematic because people could use the same technology for very good or very bad purposes. Problematic, at times, because the systems (and we're thinking of AI here) have been developed within a particular commercial and political environment and may already be marked by the values of those environments.

We want to think critically about ensuring that AI will ultimately be harnessed by humanity, used by humanity in a way that ensures that it realizes its potential to be good for all human beings.

We had a speaker here recently, Carlo Ratti, an architect. He was talking about technology, and he quoted an American philosopher and architect, Buckminster Fuller, who said about all technology: 'We have the choice either to be architects or victims.' In a sense, this document is inviting people to try and make sure that we are holding people responsible, so that we are effectively going to be architects of something, to ensure, to plan, to determine that it will be used for good, not just leave it to random factors, to commercial considerations, to political advantage. Humanity needs to have ownership of the processes, and be attentive to ensure that there will be a sense of responsibility.

And that's where Asimov's Robot series comes in. Where will the responsibility lie? AI machines will do extraordinary things. We won't be able to understand how they're doing them at times. They're developing a capacity to reprogram themselves and advance forward. So, what we have to do is try and say: where is the responsibility? Many people in the industry now talk about AI being 'ethical by design': that you should think from the beginning, what are the problems? What are the difficulties? How do we plan in a way that we avoid problems? That means: how do we make it secure, so that it works well and doesn't malfunction? How do we ensure that it's not easily exploited by people who would use it for bad ends? How do we ensure that the databases conditioning AI are actually reflective of the whole of human experience, not just that which has already been digitalized? In short, how do we ensure that it reflects the best of us as humans?

Therefore, we always try to hold responsible those who are designing, planning, and developing AI, but also those who are using it. This is the area of layering out responsibility. One interesting development in the AI field is that some of the professional associations of engineers, and of others working in the area, are drawing up their own codes of ethics. They have the technical competence to develop the technology, but they are asking the questions: what is it going to be used for, how will it be used, and how do we ensure that it is held accountable to the broader human community?

Q: If you could highlight one aspect of the document, what would it be?

I'm not sure there is one aspect of the document I would want to highlight. But what I would want to say to people who are likely to read this text, whether they're Catholic or not, is this: try to get as informed as you can about what's happening here, and don't feel disempowered or sidelined. I say this as somebody who is older in life, and I say it to my own generation: we should not simply cop out.

One thing I would say to people is to begin using the technologies, explore them, see how extraordinary they are, but also begin to be critical of them, to learn how to be able to evaluate them and think about them. So, what I would be taking from this is the importance of responsibility.

Each and every person should think about the level of his or her own responsibility, and that layers up from the user. Am I going to start sharing content that I know is dubious, that I know is there to provoke hate? I need to take personal responsibility for how I use AI and what I do with it. Then local communities in many parts of the world are asking questions, such as the fact that AI is hugely energy-consuming. Will it be sustainable? How do we think about that in our local communities?

Another area that is highlighted in the document, and maybe it's a parochial interest for us here, is the extraordinary contribution of Catholic universities. They have a wonderful mixture of people who are skilled in the humanities, philosophy, and theology, and also people who have scientific backgrounds. The hope is that we can make those universities incubators of thought, drawing on the interdisciplinarity and transdisciplinarity you have there, where we can begin the conversations between the humanities and the sciences, to ensure that we think about and reflect on the responsible development and use of AI.

 

New Vatican document examines potential and risks of AI



In a Note on the relationship between artificial intelligence and human intelligence, the Dicasteries for the Doctrine of the Faith and for Culture and Education highlight the potential and the challenges of artificial intelligence in the areas of education, the economy, labour, health, human and international relations, and war.

By Salvatore Cernuzio

The Pope’s warnings about Artificial Intelligence in recent years provide the outline for “Antiqua et Nova,” the “Note on the relationship between artificial intelligence and human intelligence,” that offers the results of a mutual reflection between the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education. The new document is addressed especially to “those entrusted with transmitting the faith,” but also to “those who share the conviction that scientific and technological advances should be directed toward serving the human person and the common good” [5].

In 117 paragraphs, “Antiqua et Nova” highlights challenges and opportunities of the development of Artificial Intelligence (AI) in the fields of education, economy, work, health, relationships, and warfare. In the latter sphere, for instance, the document warns of AI’s potential to increase “the instruments of war well beyond the scope of human oversight and precipitating a destabilizing arms race, with catastrophic consequences for human rights” [99].


Specifically, the document lists not only the risks but also the progress associated with AI, which it encourages as “part of the collaboration of man and woman with God” [2]. However, it does not avoid the concerns that come with all innovations, whose effects are still unpredictable.

Distinguishing between AI and human intelligence

Several paragraphs of the Note are devoted to the distinction between AI and human intelligence. Quoting Pope Francis, the document affirms that “the very use of the word ‘intelligence’ in connection to AI ‘can prove misleading’… in light of this, AI should not be seen as an artificial form of human intelligence, but as a product of it” [35]. “Like any product of human ingenuity, AI can also be directed toward positive or negative ends” [40]. AI “‘could introduce important innovations’” [48] but it also risks aggravating situations of discrimination, poverty, “digital divide,” and social inequalities [52]. Moreover, “the concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns,” including “the risk that AI could be manipulated for personal or corporate gain or to direct public opinion for the benefit of a specific industry” [53].

War

With reference to war, “Antiqua et Nova” stresses that autonomous and lethal weapons systems capable of “identifying and striking targets without direct human intervention” are a “cause for grave ethical concern” [100]. It notes that Pope Francis has called for their use to be banned since they pose “an ‘existential risk’ by having the potential to act in ways that could threaten the survival of entire regions or even of humanity itself” [101]. “This danger demands serious attention,” the document says, “reflecting the long-standing concern about technologies that grant war ‘an uncontrollable destructive power over great numbers of innocent civilians,’ without even sparing children” [101].

Human relations

On human relations, the document notes that AI can lead to “harmful isolation” [58], that “anthropomorphizing AI” poses problems for children's growth [60] and that misrepresenting AI as a person is “a grave ethical violation” if this is done “for fraudulent purposes.” Similarly, “using AI to deceive in other contexts—such as education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight” [62].

Economy and labour

The same vigilance is called for in the economic-financial sphere. “Antiqua et Nova” notes that, especially in the field of labour, “while AI promises to boost productivity… current approaches to the technology can paradoxically deskill workers, subject them to automated surveillance, and relegate them to rigid and repetitive tasks” [67].

Health

The Note also dedicates ample space to the issue of healthcare. Recalling the “immense potential” in various applications in the medical field, it warns that if AI were to replace the doctor-patient relationship, it would risk “worsening the loneliness that often accompanies illness” [73]. It also warns that “the integration of AI into healthcare also poses the risk of amplifying other existing disparities in access to medical care,” with the risk of “reinforcing a ‘medicine for the rich’ model, where those with financial means benefit from advanced preventative tools and personalized health information while others struggle to access even basic services” [76].

Education

In the field of education, “Antiqua et Nova” notes that “AI presents both opportunities and challenges.” If used prudently, AI can improve access to education and offer “immediate feedback” to students [80]. One problem is that many programmes “merely provide answers instead of prompting students to arrive at answers themselves or write text for themselves”, which can lead to a failure to develop critical thinking skills [82]. The Note also warns of the “biased or fabricated information” and “fake news” some programmes can generate [84].

Fake News and Deepfakes

On the subject of fake news, the document warns of the serious risk of AI “generating manipulated content and false information” [85], which becomes worse when it is spread with the aim of deceiving or causing harm [87]. “Antiqua et Nova” insists that “Those who produce and share AI-generated content should always exercise diligence in verifying the truth of what they disseminate and, in all cases, should ‘avoid the sharing of words and images that are degrading of human beings, that promote hatred and intolerance, that debase the goodness and intimacy of human sexuality or that exploit the weak and vulnerable’” [89].

Privacy and control

On privacy and control, the Note points out that some types of data can go so far as to touch “upon the individual’s interiority, perhaps even their conscience” [90], with the danger of everything becoming “a kind of spectacle to be examined and inspected” [92]. Digital surveillance “can also be misused to exert control over the lives of believers and how they express their faith” [90].

Common home

On the topic of the care of creation, “Antiqua et Nova” says, “AI has many promising applications for improving our relationship with our ‘common home’” [95]. “At the same time, current AI models and the hardware required to support them consume vast amounts of energy and water, significantly contributing to CO2 emissions and straining resources” [96].

The relationship with God

Finally, the Note warns against the risk of humanity becoming “enslaved to its own work” [105]. Artificial intelligence, “Antiqua et Nova” insists, “should be used only as a tool to complement human intelligence rather than replace its richness” [112].

AI: A tool that cannot replace the richness of humanity

 




Our Editorial Director explores highlights of the new document on artificial intelligence from the Dicasteries for the Doctrine of the Faith and for Culture and Education.

By Andrea Tornielli

What is misleading, first and foremost, is the name. So-called “Artificial Intelligence” is one of those cases where the name has counted and still counts for a lot in the common perception of the phenomenon.

The Note “Antiqua et nova,” released on Tuesday by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, reminds us first of all that AI is a tool: it performs tasks, but it does not think. It is not capable of thinking. It is therefore misleading to attribute human characteristics to it, because it is a “machine” that remains confined to the logical-mathematical sphere. That is, it does not possess a semantic understanding of reality, nor a genuinely intuitive and creative capacity. It is unable to replicate moral discernment or a disinterested openness to what is true, good, and beautiful, beyond any particular utility. In short, it lacks all that is truly and profoundly human. 

Human intelligence is, in fact, individual, while at the same time social, rational, and affective. It lives through continuous relationships mediated by the irreplaceable corporeality of the person. AI should therefore only be used as a tool that complements human intelligence, and not claim to somehow replace the particular richness of the human person.


Despite the progress of research and its possible applications, AI continues to remain a “machine” that has no moral responsibility, which remains instead with those who design and use it.

For this reason, the new document emphasises, it is important that those who make decisions based on AI are held accountable for the choices they make, and that accountability for the use of this tool is possible at every stage of the decision-making process.

Both the ends and the means used in AI applications must be evaluated to ensure that they respect and promote human dignity and the common good. This evaluation constitutes a fundamental ethical criterion for discerning the legitimacy or otherwise of the use of artificial intelligence.

Another criterion for the moral evaluation of AI, the Note suggests, concerns its capacity to implement the positive aspects of the relations of human beings with their surroundings and with the environment to foster a constructive interconnection of individuals and communities, and to enhance a shared responsibility towards the common good.

In order to achieve these goals, it is necessary to go beyond the mere accumulation of data and knowledge, striving to achieve a true “wisdom of the heart,” as Pope Francis suggests, so that the use of artificial intelligence helps human beings to actually become better.

In this sense, the Note warns against any subordination to technology, inviting us not to use technology to progressively replace human labour—which would create new forms of marginalisation and social inequality—but rather as a tool to improve care and enrich services and the quality of human relations. It is also an aid in understanding complex facts and a guide in the search for truth. For this reason, countering AI-fuelled falsifications is not only a job for experts in the field, but requires the efforts of everyone.

We must also prevent artificial intelligence from being used as a form of exploitation or to restrict people’s freedom; to benefit the few at the expense of the many; or as a form of social control, reducing people to a set of data. And it is unacceptable that in the field of warfare, a machine should be entrusted with the choice of taking human lives. Unfortunately, as so many current conflicts tragically demonstrate, we have already seen how great the devastation caused by artificial intelligence-driven weaponry can be.

A Call in the Jubilee Year to Forgive the Debts of Poor Countries


32 African countries pay more each year in interest on their foreign debts than they spend on welfare measures

Christopher Francis - Vatican

Charitable organisations in Britain have expressed concern that the countries of the Global South, burdened by their foreign debts, are unable to fund education and welfare programmes.

Calling on rich nations to forgive the debts of poor countries as part of the Jubilee Year, these charities point out that 32 African countries pay more each year in interest on their foreign debts than they spend on welfare measures.

In a joint appeal that the debts of poor countries be forgiven in the Jubilee Year 2025, British charities including CAFOD, Christian Aid, Save the Children, and Oxfam state that interest payments leave these countries unable to fund not only education and welfare programmes but also the repair of environmental damage.

The British charities have also jointly called on the UK government not only to come forward to cancel the loans it has itself issued, but also to bring together private lenders based in Britain and negotiate with them to reduce the foreign debt of poor countries.

A Note on the Challenges Posed by Artificial Intelligence


Artificial intelligence should not be regarded as an intelligence standing on its own, but treated as a product of human intellect

Christopher Francis - Vatican

The Vatican's Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education have jointly published a note on the relationship between artificial intelligence and human intelligence, addressing the challenges AI poses in education, the economy, work, health care, human and international relations, and war.

Addressed to those who hand on and spread the faith, and to all who are eager that every scientific and technological advance should serve humanity and the common good, the document, entitled Antiqua et Nova, highlights the dangers posed by artificial intelligence, while also presenting it, on the other hand, as part of the collaboration between God and humankind.

Expressing deep concern over the unforeseeable consequences of new inventions, the Vatican document also urges, as Pope Francis has noted, that artificial intelligence not be regarded as a separate form of intelligence but treated as a product of human intellect.

Observing that artificial intelligence can lead us towards both good and evil, the document points out that, although AI opens the way to new discoveries, there is a danger that it will also be used for harmful ends: discrimination, poverty, the digital divide, inequality, the concentration of power in a few hands, and the manufacture of opinion in favour of a single institution.

The Holy See further stresses that deploying weapons through artificial intelligence without human intervention would raise grave moral concerns, and expresses its worry that technologies serving war will harm innocent people, especially children.

The document, jointly issued by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, also voices its misgivings that artificial intelligence could drive part of society into harmful isolation. It notes the ways in which the economy and employment may be affected, the impact AI may have on health care and education, and the harm it could do through the spread of false information, the loss of personal freedom, and damage to our relationship with creation and with God.

Andrea Tornielli: AI Is Incapable of Thinking on Its Own


Artificial intelligence can serve as an aid to human intelligence, but it can never be said to replace it

Christopher Francis - Vatican

Andrea Tornielli, Editorial Director of the Vatican's Dicastery for Communication, has said that since artificial intelligence is only a tool, not something that operates or thinks on its own, it is wrong to attribute human qualities to it, or even to give it the name "intelligence".

Tornielli made these remarks while offering his reflections on Antiqua et nova, the note on artificial intelligence published on Tuesday, 28 January by the Vatican's Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education.

The head of the Vatican's news service noted that AI software not only cannot stand on its own, grasp realities, and make decisions, but also lacks any power to create something of itself, and he pointed out that it possesses no capacity of its own for moral discernment.

Human intelligence, Tornielli said, is unique: it embraces social character, reason, and emotion, and it continues to grow through human relationships. Artificial intelligence, he added, can serve as an aid to human intelligence, but it can never be said to replace it.

He further stated that this tool, which remains under the control of those who created the AI software, bears no moral responsibility of any kind.

Tornielli also pointed to the need to examine how far this software tool respects and promotes human dignity and the common good.
