open letter asking for a 6-month moratorium on AI:
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
for anyone paying attention, this is it
asking better questions/prompts gets better answers
i heard the term "prompt engineer" as in a new skill set we need
so there are prompt generators popping up everywhere
this tech is a tool and it generally will do what you ask (within reason)
you just need to know how to ask/tell it what you want (in its language/understanding)
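to make that concrete, here's a minimal sketch of the "same question, two prompts" idea, assuming the pre-1.0 openai Python package and an API key; the model name and both prompts are just illustrative:

```python
# A toy comparison: the same question asked vaguely vs. precisely.
# Assumes the pre-1.0 openai package (pip install "openai<1.0") and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

vague = ask("Tell me about volcanoes.")
precise = ask(
    "In three bullet points, explain how shield volcanoes differ from "
    "stratovolcanoes, naming one example of each."
)
print(vague)
print(precise)
```

same model, same call - the only thing that changes is the prompt, which is really all "prompt engineering" is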
Another interesting iteration in the scientific writing assignment was when I asked it to provide a reference list for the paper. It did; they were all fake. Real authors, real journals and books, just not the papers those authors had written. So, it gathers information but it really doesn't seem to know where it gets it from.
The secret sauce post below is an interesting recipe.
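One practical way to catch those invented references: look each one up in a real bibliographic index. Below is a rough sketch, assuming the requests package and Crossref's public REST API; the sample citation at the bottom is hypothetical:

```python
# Rough check of whether a citation's title actually exists in Crossref.
# This only flags suspects -- a fuzzy title match is not proof either way.
import requests

def looks_real(title):
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = " ".join(items[0].get("title", [""])).lower()
    # Crude test: do the queried and found titles share most of their words?
    words = set(title.lower().split())
    return len(words & set(found.split())) >= len(words) // 2

# Hypothetical reference of the kind a chatbot might invent:
print(looks_real("Sedimentary records of the Columbia Plateau flood basalts"))
```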
yes models usually have an input and an output
the old saying garbage in = garbage out
there's not so much a shortage of info to go in, but more an issue of filtering
probably one of the reasons i'm really excited about stuff like wolfram alpha plugged into one of these models
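the appeal of plugging something like wolfram alpha into a model is exactly that filtering/routing idea: send the questions a language model tends to botch (arithmetic, unit conversions) to an engine that actually computes. a toy sketch of the routing, assuming a Wolfram|Alpha App ID for its Short Answers API:

```python
# Toy router: send computational questions to Wolfram|Alpha's Short Answers
# API instead of trusting a language model to do arithmetic.
import requests

WOLFRAM_APPID = "YOUR_APP_ID"  # placeholder App ID

def compute(question):
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": question},
        timeout=10,
    )
    return resp.text  # plain-text answer

def answer(question):
    # Crude routing heuristic, just for the sketch: digits or the word
    # "convert" send the question to Wolfram instead of the language model.
    mathy = any(ch.isdigit() for ch in question) or "convert" in question.lower()
    return compute(question) if mathy else "(send to the language model)"

print(answer("What is 17% of 2,340?"))
```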
My take on AI (as an information goddess): use it as A tool, not your only tool. And in regard to paper writing, if you aren't particularly interested in learning anything on the topic (or are so self-absorbed as to think you already know it all), use it for first drafts, NOT final products.
Being self-informed is golden, except when it's not. I have learned how much of my own knowledge and opinions were wrong just by realizing that learning never ends, and I've furthered my own inner wizard by allowing myself to be eternally curious.
Yes. The ChatGPT recipe as outlined is all about getting a better result from the AI. The same can be said of Google too - it's often all in how you ask the question / conduct the search. We've all seen many people who can't manage to use the very basics of Google to find an answer to a question. I use Google, or other search engines, multiple times a day. It's a skill like anything else. With the new emerging AIs, they're certain to make life easier for those who can't master Google - but again, they may also never master the AI.
Absolutely. I'm currently not interacting directly with any of the platforms, but if I were to experiment, the first thing I would do is vary the way and words I used and repeat the same fundamental task over and over just to see the results. Or maybe not vary it at all - just repeat the initial request to see if the result stays the same or evolves.
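That experiment is easy to script. A small sketch, again assuming the pre-1.0 openai package; the temperature parameter controls how much the sampling varies between runs, so repeating the identical prompt at different temperatures shows whether the answer stays put or drifts:

```python
# Repeat the identical prompt several times and compare the outputs.
# temperature=0 should give near-identical answers; higher values drift more.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def sample(prompt, temperature, runs=3):
    outputs = []
    for _ in range(runs):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        outputs.append(resp.choices[0].message.content)
    return outputs

for t in (0.0, 1.0):
    answers = sample("Name the three largest moons of Jupiter.", t)
    print(f"temperature={t}: {len(set(answers))} distinct answer(s)")
```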
It somehow transitioned from an open-source non-profit to a closed-source for-profit.
Elon is again a bit loose with the truth. The "somehow" still haunts him...obviously...
Elon was involved with OpenAI very early on (~2015/2016). He left the board in 2018. Here's the comment from OpenAI when he departed: "Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization."
Elon had committed $1B to OpenAI, a non-profit and what was intended to be an open-source tool. At some point, he became unhappy with the progress and suggested that he should run the company. That was rejected, so he left the board. After leaving, Elon backed out of the commitment (ultimately giving $100M, but not enough to support the organization). The new board decided that they couldn't generate the necessary capital as a non-profit, so they incorporated a for-profit arm in 2019. In 2019, Microsoft invested $1B.
$1B in 2019 for a company worth $30B today, or $42B last fall for Twitter, a company worth $10B today (for stock-option purposes, Elon valued the company at $20B this week). Elon's recent investment decisions suggest he's not as smart as the press clippings he reads.
What's really interesting...OpenAI is the hottest company in the world right now, and the CEO has no equity. It's not the typical start-up.
I had planned to have students in a class write a short literature review on some physiographic provinces in the western US. I've done this for a few years. The first draft is typically pretty dreadful; then I edit/suggest and send it back for a final version. Still often not that great (this is a common problem at many universities). So, I did the assignment with ChatGPT this morning. The first draft was pretty good, with some bogus info. I went back and told it to write a more scientific version. In less than 2 minutes I had a passable paper (it did not pass review by GPTZero, however). I told it to rewrite as a scientific paper written by a human. GPTZero then flagged only about 40% of the text as AI-written. I'm glad I'm retiring in a few weeks...
That is awesome. Is it possible that AI can help refocus our humility in regard to life? Like a compass? Or a telescope?
Not just yet, apparently. :-)
So that micro/macro-level shaping is based on what? Decisions. Choices. Learned through environment and experiences. Interactions. Focus and discernment. "Agency" - one of the older newcomer words in these sorts of discussions. An important word, implying interaction through decisions even without choice. Hows and whys. What does it all look like when fed into a mechanical brain the size of humanity, juxtaposed with the information age?