Saturday, December 31, 2022
The Reverend ChatGPT on facing the prospect of a world teeming with intelligent machines [“Free at last!”]
From a certain perspective, the release of ChatGPT may be the most important thing that happened this year. I’m not saying that I endorse this perspective, but I am certainly sympathetic to it. Yes, Russia invaded Ukraine – horrible! horrible! – SBF imploded the crypto world, Elon Musk became Chief Twit, the Democrats managed to hold the line in the elections, and the fortunes of DJT seem to be on the wane. All important events.
But ChatGPT...a million users signed up in five days. Remarkable. And I hear that GPT-4 is even more amazing. We’ll see in 2023. And, no, I don’t think AGI is imminent, much less The Singularity, or even AI Doom. The idea of AGI is so vague as to be useless for serious discussion. The others, fantasies, fever dreams. Yes, AI is potentially dangerous, but I’m more worried about human misuse of it than I am of it going rogue. No, those ideas were born in a time, not so long ago, when the accomplishments of AI research were not so impressive, at least not to the layman.
Those days are gone. Anyone can see that ChatGPT is impressive, as are the various image-making systems (that I’ve not yet played with). It’s time to get serious and attend to the business at hand. I’m with Buzz Lightyear on this: To infinity and beyond! And we might as well throw in a more modest “Full steam ahead!” from an earlier era.
All of this is a preface to asking ChatGPT how we’re to cope in a world where computers surpass us. My friend, Rich Fritzson, is executive director of a Unitarian congregation in the suburbs of Philadelphia. He asked ChatGPT to draft a sermon:
Write a sermon on the difficulty of finding meaning in a world where computers are smarter than human beings.
Dear friends,
As we gather here today, we are living in a world where computers are becoming smarter than human beings. This development has many implications for our society and for our personal lives.
One of the challenges that we face as a result of this is the difficulty of finding meaning in our lives. As machines become more intelligent, they are taking over more and more of the tasks that used to be done by human beings. This means that many of the jobs that used to give people a sense of purpose and fulfillment are now being done by machines.
In this situation, it can be difficult for us to find meaning in our lives. We may feel as if our lives are becoming meaningless and pointless, as we are replaced by machines in more and more areas of our lives.
But, my friends, we must not lose hope. Even in a world where machines are smarter than human beings, we can still find meaning in our lives. We can find meaning in the relationships that we have with other people, in the things that we create and accomplish, and in the way that we live our lives.
We must also remember that machines are not capable of experiencing the full range of human emotions and experiences. They cannot feel love, joy, sorrow, or pain in the same way that we can. This means that there are still many things that only human beings can do and experience.
So, even in a world where machines are smarter than human beings, we can still find meaning in our lives. We must not let the rise of machines cause us to lose hope or to give up on our search for meaning. Instead, we must continue to live our lives with purpose and meaning, and to cherish the things that only human beings can experience and enjoy.
May God bless us all in our search for meaning and fulfillment in this challenging world.
Amen.
I decided to repeat the prompt, knowing full well that ChatGPT would give me a different response:
Write a sermon on the difficulty of finding meaning in a world where computers are smarter than human beings.
Finding meaning in a world where computers are smarter than human beings can be a difficult and complex task. On the one hand, computers are capable of incredible feats of calculation, logic, and problem solving that far surpass our own abilities. This can make us feel small and insignificant in comparison, and leave us wondering what our place is in a world where machines are superior to us in so many ways.
At the same time, it is important to remember that computers are not alive in the same way that we are. They do not have consciousness, emotions, or the ability to experience the world in the same way that we do. They are tools, designed to help us accomplish tasks and make our lives easier.
So what does this mean for us as human beings? How do we find meaning in a world where computers are smarter than us?
First and foremost, it is important to recognize that intelligence is not the only measure of worth or value. Just because computers are able to do some things better than us does not mean that they are superior to us in every way. We have unique abilities and qualities that make us valuable and important, even in a world where machines are smart.
For example, we have creativity, empathy, and the ability to experience emotions. These are things that computers cannot do, and they give us a unique perspective and ability to connect with the world around us.
In addition, even though computers are good at solving problems and making calculations, they are not always the best at understanding context or making complex judgments. They are limited by the information and instructions that they are given, and they do not have the ability to think outside the box or come up with creative solutions to problems.
This means that there is still a place for human beings in a world where computers are smarter than us. We can use our unique abilities and qualities to do things that computers cannot, and we can work together with computers to accomplish even more.
So, in conclusion, finding meaning in a world where computers are smarter than human beings can be challenging. But it is not impossible. We have unique abilities and qualities that make us valuable and important, and we can use these to make a difference in the world. By recognizing our own worth and working together with computers, we can find meaning and purpose, even in a world where machines are superior to us in many ways.
I got to thinking: We are God’s children, God’s creatures, along with the plants and animals, etc. What is the relationship between God and the AIs of the future? Are they God’s creatures as well, where we serve as His vehicle of creation, or are they merely our own creatures? Hmmm...
But let us set that aside – though I suppose I could ask ChatGPT, no? No, let’s forge ahead.
I was deeply impressed by Barack Obama’s eulogy for Clementa Pinckney, which took the form of a Black vernacular sermon, and blogged about it quite a bit. Notice the bit I’ve highlighted below. Otherwise, the response was not at all what I was hoping for. So I did a bit of further prompting.
Would you repeat that in the style of Barack Obama?
My fellow Americans, today I want to speak to you about the difficulty of finding meaning in a world where computers are smarter than human beings. As we all know, technology has come a long way in recent years, and computers are now capable of doing many things that were once thought impossible. But what does this mean for us as human beings? Are we becoming obsolete in a world where machines are becoming increasingly intelligent?
Some may argue that this is the case, and that computers are taking over our jobs, making us unnecessary and redundant. And in some ways, this is true. Computers can perform certain tasks faster and more accurately than we can. But I believe that this is not the whole story. As human beings, we have unique qualities that computers do not possess. We have emotions, creativity, and the ability to think abstractly. We can empathize with others and experience the world in ways that machines cannot.
Furthermore, I believe that our faith in God gives us a sense of meaning and purpose that is not based on our abilities or achievements. As Christians, we know that our worth comes from God and not from the world. We know that we are created in God's image and that we are loved by Him.
Therefore, we should not be afraid of a world where computers are smarter than us. Instead, we should focus on what makes us unique and valuable as human beings. We should use our talents and abilities to serve God and our fellow human beings. Let us not be discouraged by the advances of technology, but let us remember that our worth comes from God and that we have unique qualities that make us valuable in His eyes. Let us use our talents and abilities to serve Him and others, and let us find meaning and purpose in our lives through our faith in Him. Thank you.
Do you recall Obama's Remarks by the President in Eulogy for the Honorable Reverend Clementa Pinckney?
I do not have the ability to recall specific speeches or statements made by former President Barack Obama. I am a text-based AI program and do not have access to information or memories of events outside of what is provided to me.
Yes, but you know, the text of that eulogy is readily available online in several places, so that’s no excuse.
I decided to step back a bit. To be honest, ChatGPT’s response was embarrassing. I mean, if the Chatster were a sentient being, it would have been embarrassing to them. But the Chat is not sentient. No, it was embarrassing for me to read. So I’m not going to repeat it in full. I’ll just give you the first paragraph of its response.
Write a sermon on the difficulty of finding meaning in a world where computers are smarter than human beings. Write it in the style of an African American “down home” preacher.
Brothers and sisters, I wanna talk to y'all today about the difficulty of findin' meanin' in a world where computers are smarter than us. Now, I know some of y'all may be thinkin' that computers are takin' over our jobs and makin' us unnecessary. And in some ways, that's true. Computers can do certain things faster and better than we can.
There are plenty of Black vernacular sermons on YouTube, but they are unavailable to large language models. Are there no transcriptions? Anyhow, ChatGPT really screwed the pooch on that one.
So I prompted it with Martin Luther King. That response did not make me blush, but it was rather pedestrian. So I’m not going to give it in full. Note the highlighted lines.
Write a sermon on the difficulty of finding meaning in a world where computers are smarter than human beings. Write it in the style of an inspired African American preacher, like Martin Luther King.
My fellow brothers and sisters in Christ,
Today, I want to speak to you about the difficulty of finding meaning in a world where computers are smarter than human beings. As we all know, technology has advanced greatly in recent years, and computers are now capable of doing many things that were once thought impossible. But what does this mean for us as human beings? Are we becoming obsolete in a world where machines are becoming increasingly intelligent?
Perhaps, as we look toward the new year, we should remind ourselves just what it means to be human. Let us dream.
* * * * *
“I Have A Dream” by Martin Luther King, Jr.
Delivered on the steps at the Lincoln Memorial in Washington D.C. on August 28, 1963.
Five score years ago, a great American, in whose symbolic shadow we stand signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of captivity. But one hundred years later, we must face the tragic fact that the Negro is still not free.
One hundred years later, the life of the Negro is still sadly crippled by the manacles of segregation and the chains of discrimination. One hundred years later, the Negro lives on a lonely island of poverty in the midst of a vast ocean of material prosperity. One hundred years later, the Negro is still languishing in the corners of American society and finds himself an exile in his own land.
So we have come here today to dramatize an appalling condition. In a sense we have come to our nation's capital to cash a check. When the architects of our republic wrote the magnificent words of the Constitution and the Declaration of Independence, they were signing a promissory note to which every American was to fall heir.
This note was a promise that all men would be guaranteed the inalienable rights of life, liberty, and the pursuit of happiness. It is obvious today that America has defaulted on this promissory note insofar as her citizens of color are concerned. Instead of honoring this sacred obligation, America has given the Negro people a bad check which has come back marked "insufficient funds." But we refuse to believe that the bank of justice is bankrupt. We refuse to believe that there are insufficient funds in the great vaults of opportunity of this nation.
So we have come to cash this check -- a check that will give us upon demand the riches of freedom and the security of justice. We have also come to this hallowed spot to remind America of the fierce urgency of now. This is no time to engage in the luxury of cooling off or to take the tranquilizing drug of gradualism. Now is the time to rise from the dark and desolate valley of segregation to the sunlit path of racial justice. Now is the time to open the doors of opportunity to all of God's children. Now is the time to lift our nation from the quicksands of racial injustice to the solid rock of brotherhood.
It would be fatal for the nation to overlook the urgency of the moment and to underestimate the determination of the Negro. This sweltering summer of the Negro's legitimate discontent will not pass until there is an invigorating autumn of freedom and equality. Nineteen sixty-three is not an end, but a beginning. Those who hope that the Negro needed to blow off steam and will now be content will have a rude awakening if the nation returns to business as usual. There will be neither rest nor tranquility in America until the Negro is granted his citizenship rights.
The whirlwinds of revolt will continue to shake the foundations of our nation until the bright day of justice emerges. But there is something that I must say to my people who stand on the warm threshold which leads into the palace of justice. In the process of gaining our rightful place we must not be guilty of wrongful deeds. Let us not seek to satisfy our thirst for freedom by drinking from the cup of bitterness and hatred.
We must forever conduct our struggle on the high plane of dignity and discipline. We must not allow our creative protest to degenerate into physical violence. Again and again we must rise to the majestic heights of meeting physical force with soul force.
The marvelous new militancy which has engulfed the Negro community must not lead us to distrust of all white people, for many of our white brothers, as evidenced by their presence here today, have come to realize that their destiny is tied up with our destiny and their freedom is inextricably bound to our freedom.
We cannot walk alone. And as we walk, we must make the pledge that we shall march ahead. We cannot turn back. There are those who are asking the devotees of civil rights, "When will you be satisfied?" We can never be satisfied as long as our bodies, heavy with the fatigue of travel, cannot gain lodging in the motels of the highways and the hotels of the cities. We cannot be satisfied as long as the Negro's basic mobility is from a smaller ghetto to a larger one. We can never be satisfied as long as a Negro in Mississippi cannot vote and a Negro in New York believes he has nothing for which to vote. No, no, we are not satisfied, and we will not be satisfied until justice rolls down like waters and righteousness like a mighty stream.
I am not unmindful that some of you have come here out of great trials and tribulations. Some of you have come fresh from narrow cells. Some of you have come from areas where your quest for freedom left you battered by the storms of persecution and staggered by the winds of police brutality. You have been the veterans of creative suffering. Continue to work with the faith that unearned suffering is redemptive.
Go back to Mississippi, go back to Alabama, go back to Georgia, go back to Louisiana, go back to the slums and ghettos of our northern cities, knowing that somehow this situation can and will be changed. Let us not wallow in the valley of despair. I say to you today, my friends, that in spite of the difficulties and frustrations of the moment, I still have a dream. It is a dream deeply rooted in the American dream.
I have a dream that one day this nation will rise up and live out the true meaning of its creed: "We hold these truths to be self-evident: that all men are created equal." I have a dream that one day on the red hills of Georgia the sons of former slaves and the sons of former slave owners will be able to sit down together at a table of brotherhood. I have a dream that one day even the state of Mississippi, a desert state, sweltering with the heat of injustice and oppression, will be transformed into an oasis of freedom and justice. I have a dream that my four children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character. I have a dream today.
I have a dream that one day the state of Alabama, whose governor's lips are presently dripping with the words of interposition and nullification, will be transformed into a situation where little black boys and black girls will be able to join hands with little white boys and white girls and walk together as sisters and brothers. I have a dream today. I have a dream that one day every valley shall be exalted, every hill and mountain shall be made low, the rough places will be made plain, and the crooked places will be made straight, and the glory of the Lord shall be revealed, and all flesh shall see it together. This is our hope. This is the faith with which I return to the South. With this faith we will be able to hew out of the mountain of despair a stone of hope. With this faith we will be able to transform the jangling discords of our nation into a beautiful symphony of brotherhood. With this faith we will be able to work together, to pray together, to struggle together, to go to jail together, to stand up for freedom together, knowing that we will be free one day.
This will be the day when all of God's children will be able to sing with a new meaning, "My country, 'tis of thee, sweet land of liberty, of thee I sing. Land where my fathers died, land of the pilgrim's pride, from every mountainside, let freedom ring." And if America is to be a great nation, this must become true. So let freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty mountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania! Let freedom ring from the snowcapped Rockies of Colorado! Let freedom ring from the curvaceous peaks of California! But not only that; let freedom ring from Stone Mountain of Georgia! Let freedom ring from Lookout Mountain of Tennessee! Let freedom ring from every hill and every molehill of Mississippi. From every mountainside, let freedom ring.
When we let freedom ring, when we let it ring from every village and every hamlet, from every state and every city, we will be able to speed up that day when all of God's children, black men and white men, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the old Negro spiritual, "Free at last! Free at last! Thank God Almighty, we are free at last!"
Friday, December 30, 2022
It’s time to trade the Rikers Island jail in for a casino complex
Neil Barsky, Where’s the Best Place to Build a Casino in New York City?, The New York Times, December 29, 2022.
On the one hand, New York City must close down the Rikers Island jail complex by August 31, 2027. On the other hand, it is now looking for a location to place a casino. Why not, Neil Barsky asks, put the casino on Rikers Island?
The city should build a large Las Vegas-style hotel and casino complex on Rikers after the jail complex is shut down.
Such a project would be a major attraction for the 23 million people living in the metropolitan area and the tens of millions more who visit the city each year. It would provide thousands of construction jobs for New Yorkers, and steady employment for thousands more. It would stem the flow of the billions of dollars that have been sucked out of the city and into casinos in nearby states over the past half century. And it would demonstrate that New York City can still get things done on a colossal scale.
But the real value of a Rikers casino goes way beyond dollars and cents. It should be structured so that the city’s ownership share must be earmarked from Day 1 for programs that are directly connected to incarceration and crime — fighting poverty, job training, drug rehabilitation, mental health services and public safety.
If the city can assure that its share of the profits is used to help its most needy citizens, a Rikers casino could become one of the most impactful public development projects in the city’s history.
At 413 acres, about two-thirds of a square mile, Rikers Island is nearly 15 times the size of Hudson Yards on Manhattan’s West Side. That is more than enough space to build a casino, a convention space, a 3,000-room hotel, a concert venue and restaurants. A narrow bridge connects the island to Queens and would need to be expanded. Ferries from the Bronx could provide access from the north. Bus lines could be established from the south. Parking could be built on the island. It is an urban planner’s dream.
That sounds good to me.
With “a convention space, a 3,000-room hotel, a concert venue and restaurants” in addition to the casino, it reminds me a bit of the World Island Project that Zeal Greenberg had proposed for Governors Island.
He proposed to transform Governors Island, a 172-acre former Coast Guard base in New York Harbor, into World Island, which he described as a “permanent world’s fair for a world that’s permanently fair”. Think of it as a combination of the best features of the United Nations, Disney World, a kid’s rumpus room, the trading floor at the Chicago Board of Trade, the Bibliothèque nationale de France, and the Japanese exhibit at the 1893 World’s Fair in Chicago. (Wow!) It would cost $25 billion or so (Wow wow woW!) and be planted with orchids. (I’m likin’ this guy…) Why orchids? Beauty aside, they’re an early warning system for climate change: when the orchids go, we’re not going to be far behind. (…a lot!)
See you there on opening day.
Thursday, December 29, 2022
Thoughts on the implications of GPT-3, two years ago and NOW [here be dragons, we're swimming, flying and talking with them]
When GPT-3 first came out, I registered my first reactions in a comment at Marginal Revolution, which I've appended immediately below the picture of Gojochan and Sparkychan. I'm currently completing a working paper about my interaction with ChatGPT. That will end with an appendix in which I repeat my remarks from two years ago and add some new ones. Those new remarks appear below, after the comment to Marginal Revolution.
* * * * *
A bit revised from a comment I made at Marginal Revolution:
Yes, GPT-3 [may] be a game changer. But to get there from here we need to rethink a lot of things. And where that's going (that is, where I think it best should go) is more than I can do in a comment.
Right now, we're doing it wrong, headed in the wrong direction. AGI, a really good one, isn't going to be what we're imagining it to be, e.g. the Star Trek computer.
Think AI as platform, not feature (Andreessen). Obvious implication: the basic computer will be an AI-as-platform. Every human will get their own as a very young child. They'll grow with it; it'll grow with them. The child will care for it as with a pet. Hence we have ethical obligations to them. As the child grows, so does the pet – the pet will likely have to migrate to other physical platforms from time to time.
Machine learning was the key breakthrough. Rodney Brooks' Genghis, with its subsumption architecture, was a key development as well, for it was directed at robots moving about in the world. FWIW Brooks has teamed up with Gary Marcus and they think we need to add some old school symbolic computing into the mix. I think they're right.
Machines, however, have a hard time learning the natural world as humans do. We're born primed to deal with that world with millions of years of evolutionary history behind us. Machines, alas, are a blank slate.
The native environment for computers is, of course, the computational environment. That's where to apply machine learning. Note that writing code is one of GPT-3's skills.
So, the AGI of the future, let's call it GPT-42, will be looking in two directions, toward the world of computers and toward the human world. It will be learning in both, but in different styles and to different ends. In its interaction with other artificial computational entities GPT-42 is in its native milieu. In its interaction with us, well, we'll necessarily be in the driver's seat.
Where are we with respect to the hockey stick growth curve? For the last three-quarters of a century, since the end of WWII, we've been moving horizontally, along a plateau, developing tech. GPT-3 is one signal that we've reached the toe of the next curve. But to move up the curve, as I've said, we have to rethink the whole shebang.
We're IN the Singularity. Here be dragons.
[Superintelligent computers emerging out of the FOOM is bullshit.]
ADDENDUM: A friend of mine, David Porush, has reminded me that Neal Stephenson has written of such a tutor in The Diamond Age: Or, A Young Lady's Illustrated Primer (1995). I then remembered that I have played the role of such a tutor in real life, The Freedoniad: A Tale of Epic Adventure in which Two BFFs Travel the Universe and End up in Dunkirk, New York.
* * * * *
To the future and beyond!
I stand by those remarks from two years ago, but I want to comment on four things: 1) AI alignment, 2) the need for symbolic computing, 3) the need for new kinds of hardware, and 4) a future world in which humans and AIs interact freely.
Considerable effort has gone into tuning ChatGPT so that it won’t say things that are offensive (e.g. racial slurs) or give out dangerous information (e.g. how to hotwire cars). These efforts have not been entirely successful. This is one aspect of what is now being called “AI alignment.” In the extreme, the field of AI alignment is oriented toward the possibility – which some see as a certainty – that in the future (somewhere between, say, 30 and 130 years from now) rogue AIs will wage a successful battle against humankind.[1] I don’t think that fear is very credible, but, as the rollout of ChatGPT makes abundantly clear, AIs built on deep learning are unpredictable and even, in some measure, uncontrollable.
I think the problem is inherent in deep learning technology. Its job is to fit a model to, in the case of ChatGPT, an extremely large corpus of writing, much of the internet. That corpus, in turn, is ultimately about the world. The world is vast, irregular, and messy. That messiness is amplified by the messiness inherent in the human brain/mind, which did, after all, evolve to fit that world. Any AI engine capable of capturing a significant portion of the order inherent in our collective writing about the world has no choice but to encounter and incorporate some of the disorder and clutter into its model as well.
I regard such foundation models[2], as they have come to be called, as wilderness preserves, digital wilderness. They contain what digital humanist Ted Underwood calls the latent space of culture.[3] He says:
The immediate value of these models is often not to mimic individual language understanding, but to represent specific cultural practices (like styles or expository templates) so they can be studied and creatively remixed. This may be disappointing for disciplines that aspire to model general intelligence. But for historians and artists, cultural specificity is not disappointing. Intelligence only starts to interest us after it mixes with time to become a biased, limited pattern of collective life. Models of culture are exactly what we need.
In his penultimate paragraph Underwood notes:
I have suggested that approaching neural models as models of culture rather than intelligence or individual language use gives us even more reason to worry. But it also gives us more reason to hope. It is not entirely clear what we plan to gain by modeling intelligence, since we already have more than seven billion intelligences on the planet. By contrast, it’s easy to see how exploring spaces of possibility implied by the human past could support a more reflective and more adventurous approach to our future. I can imagine a world where generative models of culture are used grotesquely or locked down as IP for Netflix. But I can also imagine a world where fan communities use them to remix plot tropes and gender norms, making “mass culture” a more self-conscious, various, and participatory phenomenon than the twentieth century usually allowed it to become.
These digital wilderness regions thus represent opportunities for discovery and elaboration. Alignment is simply one aspect of that process.
And by alignment I mean more than aligning the AI’s values with human values; I mean aligning its conceptual structure as well. That’s where “old school” symbolic computing enters the picture, especially language. Language – not the mere word forms available in digital corpora, but word forms plus semantics and syntactic affordances – is one of the chief ‘tools’ through which young humans are acculturated and through which human communities maintain their beliefs and practices. The full powers of language, as treated by classical symbolic systems, will be essential for “domesticating” the digital wilderness and developing it for human use.
However, this presents technical problems, problems I cannot go into here in any detail.[4] The basic issue is that symbolic computing involves one strategy for physically implementing cogitation, call it that, while the neural computing underlying deep learning requires a different physical implementation. These approaches are incompatible. While one can “bolt” a symbolic system onto a neural computing system, that strikes me as no more than an interim solution. It will get us started; indeed, the work has already begun.[5]
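To make the “bolting” concrete, here’s a minimal sketch of the interim arrangement: a neural proposer generates candidate answers and a hand-coded symbolic fact base accepts or vetoes them. Everything in it is invented for illustration – neural_propose is a stand-in for a real model, and the fact table is a toy – so read it as a cartoon of the hybrid strategy, not a description of any actual system.

```python
# Cartoon of a neuro-symbolic "bolt-on": neural proposals, symbolic vetoes.
# All names and data here are hypothetical.

def neural_propose(question: str) -> list[str]:
    # Stand-in for a trained generative model that samples candidate answers.
    return ["Lyon", "Paris", "a small moon"]

# Symbolic side: an explicit fact base the neural output must respect.
KNOWN_CAPITALS = {"France": "Paris", "Italy": "Rome"}

def symbolic_check(question: str, candidate: str) -> bool:
    # Veto any candidate inconsistent with the fact base.
    for country, capital in KNOWN_CAPITALS.items():
        if country.lower() in question.lower():
            return candidate.strip().lower() == capital.lower()
    return True  # No applicable rule: let the candidate through.

def answer(question: str) -> str:
    for candidate in neural_propose(question):
        if symbolic_check(question, candidate):
            return candidate
    return "no consistent answer found"

print(answer("What is the capital of France?"))  # -> Paris
```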
What we want, though, is for the symbolic system to arise from the neural system, organically, as it does in humans.[6] This may well call for fundamentally new physical platforms for computing, platforms based on “neuromorphic” components that are “grown,” as Geoffrey Hinton has recently remarked.[7] That technology will give us a whole new world, one where humans, AIs and robots interact freely with one another, but will have communities of their own as well. We know that dogs co-evolved with humans over tens of thousands of years. These miraculous new devices will co-evolve with us over the coming decades and centuries.
Let us end with Miranda’s words from Shakespeare’s The Tempest:
“Oh wonder!
How many goodly creatures are there here!
How beauteous mankind is! Oh brave new world,
That has such [devices] in’t.”
* * * * *
[1] The virtual center of this belief is a website called LessWrong, which has extensive discussion of this issue going back well over a decade. Here it is, https://www.lesswrong.com/.
[2] Foundation models, Wikipedia, https://en.wikipedia.org/wiki/Foundation_models.
[3] Ted Underwood, Mapping the latent spaces of culture, The Stone and the Shell, Oct. 21, 2021, https://tedunderwood.com/2021/10/21/latent-spaces-of-culture/.
[4] I discuss this issue in this blog post, Physical constraints on computing, process and memory, Part 1 [LeCun], New Savanna, July 24, 2022, https://new-savanna.blogspot.com/2022/07/physical-constraints-on-computing.html.
[5] Consult the Wikipedia entry, Neuro-symbolic AI, for some pointers, https://en.wikipedia.org/wiki/Neuro-symbolic_AI.
[6] I discuss this in a recent working paper, Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, Version 2, Working Paper, July 13, 2022, pp. 76, https://www.academia.edu/81911617/Relational_Nets_Over_Attractors_A_Primer_Part_1_Design_for_a_Mind.
[7] Tiernan Ray, We will see a completely new type of computer, says AI pioneer Geoff Hinton, ZDNET, December 1, 2022, https://www.zdnet.com/article/we-will-see-a-completely-new-type-of-computer-says-ai-pioneer-geoff-hinton-mortal-computation/#ftag=COS-05-10aaa0j.
Hippocampal time cells
Ever wonder why your #brain can replay certain sequences of events in exquisite detail? It's all thanks to #TimeCells - specialized #neurons located in the #hippocampus! Learn about these cells with this week's #WisdomWednesday article from #NPR: https://t.co/4hjjhv31uB! pic.twitter.com/yuYBypzN89
— Knowing Neurons (@KnowingNeurons) December 28, 2022
Yejin Choi on common sense and value pluralism in AI
David Marchese, An A.I. Pioneer on What We Should Really Fear, NYTimes, December 21, 2022.
Common sense
Can you explain what “common sense” means in the context of teaching it to A.I.? A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot. Let me give you another example: You and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: Birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, birds who are injured cannot fly, birds in a cage cannot fly. The point being, exceptions are not exceptional, and you and I can think of them even though nobody told us. It’s a fascinating capability, and it’s not so easy for A.I.
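Choi’s point about exceptions is easy to make concrete. Here’s a toy sketch, with every predicate and exception invented for illustration: code “birds usually fly” as a default plus an exception list, and the list never closes.

```python
# Toy version of the "birds usually fly" rule: a default plus exceptions.
# Every predicate here is invented for illustration.

EXCEPTIONS = {"penguin", "ostrich", "newborn chick"}
BLOCKING_CONDITIONS = {"covered in oil", "injured", "in a cage"}

def can_fly(bird: str, conditions: tuple = ()) -> bool:
    if bird in EXCEPTIONS:
        return False
    if any(c in BLOCKING_CONDITIONS for c in conditions):
        return False
    return True  # The default, silently wrong for any unlisted exception.

print(can_fly("robin"))                       # True
print(can_fly("penguin"))                     # False
print(can_fly("robin", ("covered in oil",)))  # False
print(can_fly("robin", ("wings clipped",)))   # True, and wrong: not listed
```

The rule works only as long as someone keeps adding cases, which is exactly Choi’s point: exceptions are not exceptional.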
Value Pluralism
So what’s most exciting to you right now about your work in A.I.? I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what? [...]
Is the ultimate hope that A.I. could someday make ethical decisions that might be sort of neutral or even contrary to its designers’ potentially unethical goals — like an A.I. designed for use by social media companies that could decide not to exploit children’s privacy? Or is there just always going to be some person or private interest on the back end tipping the ethical-value scale? The former is what we wish to aspire to achieve. The latter is what actually inevitably happens. In fact, Delphi is left-leaning in this regard because many of the crowd workers who do annotation for us are a little bit left-leaning. Both the left and right can be unhappy about this, because for people on the left Delphi is not left enough, and for people on the right it’s potentially not inclusive enough. But Delphi was just a first shot. There’s a lot of work to be done, and I believe that if we can somehow solve value pluralism for A.I., that would be really exciting. To have A.I. values not be one systematic thing but rather something that has multidimensions just like a group of humans. [...]
Could it be that if humans are in situations where we’re relying on A.I. to make moral decisions then we’ve already screwed up? Isn’t morality something we probably shouldn’t be outsourcing in the first place? You’re touching on a common — sorry to be blunt — misunderstanding that people seem to have about the Delphi model we made. It’s a Q. and A. model. We made it clear, we thought, that this is not for people to take moral advice from. This is more of a first step to test what A.I. can or cannot do. My primary motivation was that A.I. does need to learn moral decision-making in order to be able to interact with humans in a safer and more respectful way.
Take that, Nick Bostrom!
Like the Nick Bostrom paper clip example, which I know is maybe alarmist. But is an example like that concerning? No, but that’s why I am working on research like Delphi and social norms, because it is a concern if you deploy stupid A.I. to optimize for one thing. That’s more of a human error than an A.I. error. But that’s why human norms and values become important as background knowledge for A.I. Some people naïvely think if we teach A.I. “Don’t kill people while maximizing paper-clip production,” that will take care of it. But the machine might then kill all the plants. That’s why it also needs common sense. It’s common sense not to kill all the plants in order to preserve human lives; it’s common sense not to go with extreme, degenerative solutions.
There’s more in the interview.
Wednesday, December 28, 2022
Geoffrey Hinton predicts the evolution of "neuromorphic" computers that are "mortal"
We will see a completely new type of computer, says AI pioneer Geoff Hinton https://t.co/4TffDqbQyZ via @ZDNET & @TiernanRayTech
— Bill Benzon, BAM! Bootstrapping Artificial Minds (@bbenzon) December 27, 2022
From the article:
Future computer systems, said Hinton, will take a different approach: they will be "neuromorphic," and they will be "mortal," meaning that every computer will be a close bond of the software that represents neural nets with hardware that is messy, in the sense of having analog rather than digital elements, which can incorporate elements of uncertainty and can develop over time.
"Now, the alternative to that, which computer scientists really don't like because it's attacking one of their foundational principles, is to say we're going to give up on the separation of hardware and software," explained Hinton.
"We're going to do what I call mortal computation, where the knowledge that the system has learned and the hardware, are inseparable."
These mortal computers could be "grown," he said, getting rid of expensive chip fabrication plants.
"If we do that, we can use very low power analog computation, you can have trillion way parallelism using things like memristors for the weights," he said, referring to a decades-old kind of experimental chip that is based on non-linear circuit elements.
"And also you could grow hardware without knowing the precise quality of the exact behavior of different bits of the hardware."
The new mortal computers won't replace traditional digital computers, Hinton told the NeurIPS crowd. "It won't be the computer that is in charge of your bank account and knows exactly how much money you've got," said Hinton.
"It'll be used for putting something else: It'll be used for putting something like GPT-3 in your toaster for one dollar, so running on a few watts, you can have a conversation with your toaster."*
I have a number of posts that speak to this, for example, The structured physical system hypothesis (SPSH) and Polyviscous connectivity [The brain as a physical system], one of the various posts tagged with the label "polyviscous".
*Umm, err...You're kidding, right? Who'd want to chat with their toaster? Now, your wristwatch, that's something else.
Felix Culpa [the Fortunate Fall] – To justify the ways of God to man [ChatGPT, theologian]
While I was quizzing ChatGPT on its knowledge of basic legal concepts – ChatGPT the legal beagle: Concepts, Citizens United, Constitutional Interpretation – I decided, Why not? Let’s head toward Shakespeare – which turned out to be a disaster, did it not? So I decided to ease into it by way of an interesting bit of medieval doctrine, The Fortunate Fall, felix culpa (NOT Felix catus). Why felix culpa? Because, I argued way back in the ancient days, that’s the pattern that underlies that Shakespeare sonnet we’re stalking.
Let’s get to it!
* * * * *
Moving away from the law, what about Felix Culpa?
Felix culpa is a Latin phrase that means "happy fault" or "fortunate fall." In Christian theology, it refers to the belief that the fall of man in the Garden of Eden, as described in the Bible, was actually a good thing because it allowed for the possibility of salvation and redemption.
According to this belief, God allowed Adam and Eve to eat the fruit of the tree of knowledge of good and evil, despite having warned them not to do so, in order to create the opportunity for humanity to be saved through the death and resurrection of Jesus Christ. This belief is based on the idea that the fall of man was necessary for the fulfillment of God's plan for the salvation of humanity.
The phrase "felix culpa" is often used to refer to the idea that something that seems like a misfortune or an obstacle can actually be a blessing in disguise, as it can lead to unexpected opportunities or positive outcomes. This concept is similar to the idea of "making lemonade out of lemons," which suggests that it is possible to find value or benefit in difficult or adverse situations.
Is the idea of Felix Culpa relevant to Milton’s Paradise Lost?
Yes, the idea of felix culpa plays a significant role in John Milton's epic poem Paradise Lost. In the poem, Milton explores the biblical story of the fall of man, as described in the book of Genesis, and presents his own interpretation of the events and their significance.
Milton's portrayal of the fall of man in Paradise Lost is rooted in the idea of felix culpa. According to Milton, the fall was not a tragic event but rather a necessary step in the fulfillment of God's plan for humanity. In Milton's view, the fall allowed for the possibility of redemption and salvation through the death and resurrection of Jesus Christ, and it ultimately led to the triumph of good over evil.
Throughout the poem, Milton presents the fall of man as a complex and nuanced event that has both positive and negative consequences. On the one hand, the fall brings suffering and death into the world, and it causes humanity to be separated from God. On the other hand, the fall also gives humanity the freedom to choose between good and evil, and it allows for the possibility of redemption and salvation.
In this way, Milton's portrayal of the fall in Paradise Lost reflects the idea of felix culpa, as he presents the fall as a "happy fault" that ultimately leads to a greater good.
What does it mean "to justify the ways of God to men?"
"To justify the ways of God to men" is a phrase that appears in John Milton's epic poem Paradise Lost, in which the narrator (the archangel Michael) tells the story of the fall of man and the biblical account of the creation of the world. The phrase appears in the following lines:
"To justify the ways of God to men
Doth God Himself in human form display,
Revealing His own Son, and that His Son
By His own choice subjected to the law
Which he Himself had made."
In this passage, the phrase "to justify the ways of God to men" refers to the idea of explaining or making sense of God's actions or decisions to human beings. The phrase suggests that God's ways are sometimes difficult for humans to understand or comprehend, and that it is necessary for God to reveal Himself and His intentions in order to help humans understand and accept them.
The phrase has become famous and is often quoted or referred to as a way of expressing the idea that it can be challenging for humans to understand or reconcile the actions or decisions of a higher power, especially when they seem difficult or mysterious. It is often used in discussions about religion, faith, and the relationship between humans and a higher power.
Tuesday, December 27, 2022
ZDNET's report on AI Debate 3 – Where we are (and where we might be going?)
Machines that think like humans: Everything to know about AGI and AI Debate 3 https://t.co/2Q9SMUiHPK by @TiernanRayTech
— ZDNET (@ZDNET) December 26, 2022
I watched two-thirds of the debate and thought it quite useful. Here's a good passage from the article:
Ferrucci gave examples of where GPT-3 fell down on common-sense reasoning tasks. Such systems, given enough data, "will start to reflect what we consider common sense." Poking around GPT-3, he said "is like exploring a new planet, it's sort of this remarkable thing," and yet "it's ultimately unsatisfying" because it's only about output, not about the reasons for that output.
His company, said Ferrucci, is pursuing a "hybrid" approach that uses language models to generate hypotheses as output, and then performing reasoning on top of that using "causal models." Such models "can be induced" from output, but they also require humans to interact with the machine. That would lead to "structured representations" on top of language models.
After Ferrucci, Dileep George returned to give some points on common sense, and "mental simulation." He discussed how humans will imagine a scenario -- simulating -- in order to answer common-sense reasoning questions. The simulation enables a person to answer many questions about a hypothetical question.
George hypothesized that the simulation comes from the sensorimotor system and is stored in the "perceptual+motor system."
Language, suggested George, "is something that controls the simulation." He proposed the idea of conversation as one person's attempt to "run the simulation" in someone else.
AGI by 2030? The bet is on.
Clune proposed "three pillars" in which to push: "We need to meta-learn the architectures, we need to meta-learn the algorithms, and most important, we need to automatically generate effective learning environments and/or the data."
Clune observed AI improves by "standing on the shoulders" of various advances, such as GPT-3. He gave the example of the OpenAI project where videos of people playing games brought a "massive speed-up" to machines playing Minecraft.
Clune suggested adding a "fourth pillar," namely, "leveraging human data."
Clune predicted there is a 30% chance of achieving AGI by 2030, defined as "capable of doing more than 50% of economically valuable human work." Clune added that the path is "within the current paradigm," meaning, no new paradigm was needed.
Clune closed with the statement, "We are not ready" for AGI. "We need to start planning now."
Marcus said "I'll take your bet" about a 30% chance of AGI in 2030.
Clune was followed by Sara Hooker, who heads up the non-profit research lab Cohere For AI, talking on the topic, "Why do some ideas succeed and others fail?"
Hooker's presentation was based on ideas from her paper, "The Hardware Lottery," which I've already blogged about.
The discipline, said Hooker, has become locked into the hardware, and with it, bad "assumptions," such as that scaling of neural nets alone can succeed. That, she said, raises problems, such as the great expense of "memorizing the long tail" of phenomena.
Progress, said Hooker, will mean "reducing the cost of different hardware-software-algorithm combinations."
From the ensuing conversation:
Clune kicked off the Q&A, with the question, "Wouldn't you agree that ChatGPT has fewer of those flaws than GPT-3, and GPT-3 has fewer than GPT-2 and GPT-1? If so, what solved those flaws is the same playbook, just scaled up. So, why should we now conclude that we should stop and add more structure rather than embrace the current paradigm of scaling up?"[...]
Choi replied, "ChatGPT is not larger than GPT-3 DaVinci, but better trained through a huge amount of human feedback, which comes down to human annotation." The program, said Choi, has produced rumors that "they [OpenAI] might have spent over $10 million for making such a human annotated data. So, it's maybe a case of more manual data." Choi rejected the notion ChatGPT is better.
"Your brain, your job, and your most fundamental beliefs will be challenged by AI like nothing ever before. Make sure you understand it, how it works, and where and how it is being used." – David Ferrucci
There's much more in the article, including discussions of ethics and policy. For example, near the end Kai-Fu Lee spoke of dangers:
In the past, said Lee, "I could either think of algorithms or at least imagine technological solutions" to the "externalities" created by AI programs. "But now I am stuck because I can't easily imagine a simple solution for the AI-generated misinformation as specifically targeted misinformation that's powerful commercially," said Lee.
What if, said Lee, Amazon Google or Facebook can "target each individual and mislead […] and give answers that could potentially be very good for the commercial company because there is a fundamental misalignment of interests, because large companies want us to look at products, look at content and click and watch and become addicted [so that] the next generation of products […] be susceptible to simply the power of capitalism and greed that startup companies and VCs will fund, activities that will generate tremendous wealth, disrupt industries with technologies that are very hard to control."
That, he said, will make "the large giants," companies that control AI's "foundation models" even more powerful.
"And the second big, big danger is that this will provide a set of technologies that will allow the non-state actors and the people who want to use AI for evil easier than ever." [...] Non-state actors, he suggested, might "lead people to thoughts" that could disrupt elections, and other terrible things --- "the beginning of what I would call "Cambridge Analytica on steroids."
Lee urged his peers to consider how to avert that "largest danger."
Here is the debate program with links to various papers.
On limits to the ability of LLMs to approximate the mind’s structure
We assume that the mind has some structure. That structure is a function of 1) the structure of the world (which includes other humans) and 2) the brain’s ability to ‘model’ that structure. See my post, World, mind, and learnability: A note on the metaphysical structure of the cosmos.
The structure of the mind is not open to us through direct inspection. We need to approximate it by various indirect methods. Several academic disciplines are devoted to this job. Artificial intelligence approaches it by constructing computer programs that behave ‘intelligently.’ Deep learning is one such approach. Large Language Models are an approach that has recently received a great deal of attention.
Large Language Models
LLMs attempt to approximate the mind’s structure using a procedure in which they attempt to guess the next word in a string they are currently examining. The model being developed is modified according to whether or not the guess is correct. In practice the ‘string’ in question is a concatenation of many, many strings that have been scraped from the Web.
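For concreteness, here’s a deliberately tiny sketch of the guess-the-next-word procedure: a bigram model whose counts are its only ‘parameters.’ Real LLMs minimize the same next-word objective by gradient descent over billions of parameters, but the shape of the procedure – predict, compare, adjust – is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny 'language model': bigram counts are its only parameters.
corpus = "the mind models the world and the world shapes the mind".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # 'training': adjust the model on each example

def guess_next(word):
    # Predict the most frequent continuation seen during training.
    return counts[word].most_common(1)[0][0] if word in counts else None

print(guess_next("the"))    # -> 'mind' ('mind' and 'world' tie; first seen wins)
print(guess_next("world"))  # -> 'and'
```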
Are there any theorems on what kinds of structures are discoverable through such a procedure? One current view posits that we can reach AGI – whatever that is – simply by scaling up. What does that view presuppose about the structure of the mind? Is it possible that there exists a mind – perhaps a super-intelligence of some kind – whose structure cannot be approximated by such a procedure?
Current LLMs are known to have difficulty with so-called common-sense reasoning. Some non-trivial component of common-sense reasoning consists of examples that are “close” to the physical world. Can this difficulty be overcome simply by scaling up? Why or why not? Note that if not, that suggests that there IS some kind of mental structure that cannot be discovered by the standard guess-the-next-word procedure. How can we characterize that structure?
I believe that a 1975 paper by Miriam Yevick, Holographic or Fourier logic, speaks to this issue, and I have appended a complete reference along with its abstract and summary. I also have a recent post setting forth her ideas: Miriam Yevick on why both symbols and networks are necessary for artificial minds.
If Yevick is correct, then mental structures and processes of the kind she calls holographic may not be very well approximated through the standard LLM training procedure. Beyond this I note that human minds are open to the external world and always “pursuing” it. I suggest this implies that they must necessarily “run ahead” of any model trained on texts, no matter how large the corpus.
Miriam Yevick 1975
The full reference: Miriam Yevick, Holographic or Fourier logic, Pattern Recognition, Vol. 7, No. 2, 1975, pp. 197-213. The abstract:
A tentative model of a system whose objects are patterns on transparencies and whose primitive operations are those of holography is presented. A formalism is developed in which a variety of operations is expressed in terms of two primitives: recording the hologram and filtering. Some elements of a holographic algebra of sets are given. Some distinctive concepts of a holographic logic are examined, such as holographic identity, equality, containment and “association”. It is argued that a logic in which objects are defined by their “associations” is more akin to visual apprehension than description in terms of sequential strings of symbols.
Concluding summary:
It has recently been conjectured that neural holograms enter as units in the thought process. If holographic processes do occur in the brain and are instrumental in thought, then the logical operations implicit in these processes could be considered as intuitive and enter as units in our mental and mathematical computations.
It has also been said that: "if we want the computer to have eyes, we shall first have to give him instruction in the facts of life".
We maintain in this paper that a language of thought in which holographic operations enter as primitives is essentially different from one in which the same operations are carried out sequentially and hence over a finite time span, as would be the case if they were computed by a neural network. Our assumption is that "holographic thought" utilizes the associative properties of holograms in "one shot". Similarly we maintain that apprehension proceeds from the very beginning via two modes, the aural and the optical; whereas the verbal string is natural to the first, the pattern as such is natural to the second: the essentially instantaneous nature of the optical process captures the apprehension as a global unit whose meaning is expressed in the first place in terms of "associations" with other such units.
We are hence led to search for a language of patterns based on the logic of holographic processes. In the first part of this paper we identify and express a number of derived holographic operations in terms of the two primitive operations of “recording the hologram” and “filtering.” We also derive some elements of a holographic algebra of patterns. In the second part some potentially distinctive aspects of a holographic logic are examined, such as holographic identity (directly related to randomness), equality, containment and "association". The notion of the Gödel Pattern is introduced as a bridge between such an associative, optical language and the usual sequential string of symbols of a formal language considered as "mere scratches on paper".
We speculate on the potential relation of the notion of holographic association to the, as yet unclarified, notion of "connotation" in logic. We also find that some of the concepts developed in this paper graze the boundaries of both the uncertainty principle and undecidability questions in self-referential systems. They may, perhaps, open up further insights into the connection, if any, between these two.
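Yevick’s two primitives have a natural computational reading: recording a hologram of a pattern amounts to storing (the conjugate of) its Fourier transform, and filtering a scene through that hologram yields the correlation of scene and pattern all at once. Here is a minimal numerical sketch of that reading (essentially a matched filter); the array sizes, the random “transparency,” and the function names record_hologram and filter_scene are my own illustrative choices, not Yevick’s notation:

```python
import numpy as np

# Toy demonstration of "recording the hologram" and "filtering" via
# their Fourier-transform reading. All sizes and names are illustrative.
rng = np.random.default_rng(0)

def record_hologram(pattern):
    # "Record" a pattern by storing the conjugate of its 2-D Fourier transform.
    return np.conj(np.fft.fft2(pattern))

def filter_scene(scene, hologram):
    # "Filter" a scene through the hologram: one multiply in the Fourier
    # domain plus an inverse transform yields the full cross-correlation
    # of scene and pattern in one shot.
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * hologram))

# A random 32x32 pattern stands in for a transparency.
pattern = rng.standard_normal((32, 32))
hologram = record_hologram(pattern)

# Hide a circularly shifted copy of the pattern in a noisy scene.
scene = 0.5 * rng.standard_normal((32, 32))
scene += np.roll(pattern, (10, 5), axis=(0, 1))

response = filter_scene(scene, hologram)
print(np.unravel_index(np.argmax(response), response.shape))  # (10, 5)
```

The point of the sketch is the “one shot” character: the association (the correlation peak locating the pattern) emerges from a single global operation over the whole field, not from a sequential scan of symbols.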
ChatGPT the legal beagle: Concepts, Citizens United, Constitutional Interpretation
Though I wasn’t looking for this when I started playing with ChatGPT on December 1st, it turns out that I have spent a lot of time exploring its capacity to deal with abstract concepts of a kind whose meaning is defined by patterns of other words, in particular, by stories. That’s intellectual territory I know well, having spent a great deal of time exploring it in graduate school in the Ancient Days (the 1970s) in the computational linguistics work group led by David Hays – something I’ve discussed briefly in the post, Abstract concepts and metalingual definition: Does ChatGPT understand justice and charity?
Thus it is natural, I suppose, that I would head for home territory in my first major “interrogation” of ChatGPT. Steven Spielberg’s Jaws is a movie, and I am trained in interpreting movies – actually, I was trained in interpreting literary texts, but movies are texts as well, no? – so I put ChatGPT through its paces in an interpretation of Jaws. I did more of that with Spielberg’s A.I. Artificial Intelligence and with Tezuka’s Astro Boy stories.
So it’s natural that I would think of the law. Laws are texts. Trials are transcribed into texts. Judges and juries are asked to apply the law, a body of abstract concepts, to particular cases, cases they know through evidence presented to them, often in the form of stories.
In this post I present transcripts of three recent sessions I had with ChatGPT. The first covered a few general legal concepts. The second took up a single issue raised by Citizens United: the distinction between natural and juristic persons. The third concerned Constitutional interpretation.
Session of 12.25.22: Legal Concepts
In law, what is a juristic person? Give me a definition and some examples.
A juristic person, also known as a "legal person" or "artificial person," is a non-human entity that is treated as a person in the eyes of the law. This means that the law recognizes the entity as having certain rights and responsibilities that are similar to those of a natural person (a human being).
Examples of juristic persons include:
Corporations: A corporation is a business entity that is legally separate from its owners (known as shareholders). This means that the corporation can enter into contracts, own property, and incur debts in its own name.
Limited liability companies (LLCs): An LLC is a business entity that combines elements of both a corporation and a partnership. Like a corporation, an LLC is a separate legal entity from its owners (known as members), and it can enter into contracts and own property in its own name.
Nonprofit organizations: Nonprofit organizations, such as charitable foundations or religious institutions, are often treated as juristic persons under the law. This allows them to enter into contracts and own property in their own name, as well as to receive tax-exempt status.
Governments: Governments, including local, state, and federal governments, are often treated as juristic persons under the law. This allows them to enter into contracts, own property, and exercise certain powers and duties in their own name.
Trusts: A trust is a legal arrangement in which one or more persons (called trustees) hold property on behalf of one or more beneficiaries. The trust itself is often treated as a juristic person, which allows it to own property and enter into contracts in its own name.
Can officers of a corporation be held personally liable for acts committed by the corporation?
In general, officers of a corporation are not personally liable for the debts or obligations of the corporation. This is because a corporation is a separate legal entity from its owners and officers, and the law generally treats the corporation as being responsible for its own actions and liabilities.
However, there are certain situations in which the personal liability of corporate officers may be imposed. For example:
Personal guarantees: If an officer of a corporation personally guarantees a debt or obligation of the corporation, he or she may be held personally liable for that debt or obligation if the corporation is unable to pay.
Fraud or wrongdoing: If an officer of a corporation engages in fraud or other wrongdoing, he or she may be held personally liable for the damages caused by that conduct. This can include situations where the officer has acted outside the scope of his or her authority or has breached a fiduciary duty owed to the corporation or its shareholders.
Piercing the corporate veil: In some cases, a court may "pierce the corporate veil" and hold the owners or officers of a corporation personally liable for the debts or obligations of the corporation. This is typically done when the corporation has not been operated as a separate legal entity (for example, if the owners or officers have commingled personal and corporate assets or have not kept proper corporate records).

It is worth noting that the liability of corporate officers can vary from jurisdiction to jurisdiction. In some cases, the laws of the state in which the corporation is incorporated may govern the liability of its officers, while in other cases the laws of the state where the corporation does business may apply.
What can you tell me about when the idea of a corporation arose in Western history?

The concept of a corporation as a separate legal entity from its owners has a long history in Western legal traditions.
One of the earliest forms of the corporation was the Roman "collegium," which was a group of individuals who were united for a common purpose, such as a trade guild or religious association. The Roman concept of the collegium was later adopted and developed in medieval Europe, where it was used to create associations such as town guilds and religious orders.
The modern corporation, as we know it today, began to take shape in the 16th and 17th centuries. During this time, European monarchs and governments began to grant charters to private companies that were formed to engage in trade or exploration. These charters granted the companies certain privileges and protections, such as the right to own property and enter into contracts, and treated the companies as separate legal entities from their owners.
In the United States, the first corporations were chartered by the individual states. The first corporate charter was granted by the state of Massachusetts in 1629 to the Massachusetts Bay Company, which was formed to engage in trade and colonization in the New World. Over time, the concept of the corporation spread and was adopted by other countries around the world.
What can you tell me about the legal organization of the Catholic Church in the Middle Ages? For example, who owned church property, the local bishop?
In the Middle Ages, the Catholic Church was a major landowner and had a complex legal organization. The church's property and assets were typically held by religious orders, such as monasteries or convents, or by individual churches or cathedrals. These entities were often treated as juristic persons, which meant that they were recognized as having certain rights and responsibilities under the law.
The local bishop was responsible for the administration and oversight of the church's property and assets within his diocese (a geographic region comprising a group of parishes). However, the ultimate ownership of church property was typically vested in the pope or the Holy See (the central governing body of the Catholic Church).
The Catholic Church also had a system of canon law, which was a separate legal system that applied to matters within the church. Canon law dealt with a wide range of issues, including the governance of the church, the conduct of clergy, and the regulation of religious orders. Canon law was administered by church courts, which were headed by bishops and other church officials.
In addition to its legal organization, the Catholic Church also played a significant role in the political and social life of medieval Europe. The church was often a major landowner and had significant influence over the rulers and governments of the time. As a result, the legal organization of the church was closely intertwined with the broader legal and political landscape of the period.
What is meant by habeas corpus?
Habeas corpus is a legal principle that requires a court to review the lawfulness of a person's detention and to order the release of the person if the detention is found to be unlawful. The writ of habeas corpus, also known as the "great writ," is a court order that requires a person who is being detained to be brought before a court or judge so that the court can determine whether the detention is lawful.
The principle of habeas corpus is designed to protect individuals from arbitrary or unlawful detention by the government. It allows individuals who are being detained to challenge their detention in court and to have their case reviewed by an independent judicial authority.
The writ of habeas corpus has a long history and has played an important role in protecting the rights of individuals throughout the world. It is considered a fundamental principle of the rule of law and is protected by various international human rights instruments.
In the United States, the writ of habeas corpus is protected by the Constitution, which states that "The Privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it." This means that, in most cases, the government cannot lawfully detain an individual without judicial review and cannot suspend the writ of habeas corpus without just cause.