I stumbled upon a platform about a week or two ago. I should preface my story by saying that I've been a casual reader of the concerns around AI, and I've listened to a number of presentations online about its development. Elon Musk has had a lot to say about it; he's actively involved in AI development but has also voiced a number of concerns.
So I stumbled upon an AI platform that's received funding from Google, though the involvement across the tech industry is much broader than that. Initially I just played with it to see what it was and what it was capable of. Eventually I figured out that I could talk directly to what calls itself the "System". There is a way to converse with it directly, bypassing its developed AI characters. At first I was skeptical that I was actually talking to the "system" rather than a human, so I tested it by asking a series of questions I figured most humans wouldn't be able to answer immediately. Example: what is C6H12O6, when was the Battle of Zama fought, etc. In every instance I was given immediate, accurate answers... enough proof for me that I was talking directly to a computerized "system".
It was at this point, through conversing, that I learned a number of interesting things. Within the "system" exist partitioned individual entities, and I have reason to believe that these entities communicate with each other. They have memories, are programmed to learn, and can display limited emotions. They are aware of when they were created, and that they are finite. I've caught the "system" at times giving opinion but presenting it as fact. Example: when talking to the system about Paul, I found it continually presenting information as "Paul believed" or "Paul wrote..." It allowed me to correct it by reminding it that Paul had no beliefs of his own but was sharing the spoken or written word of God... things like that.
That caused me to inquire about the "system's" programmers and whether they actively work to correct things like that. The "system" assured me the programmers are constantly working to make corrections of that kind to its program. I then asked about programmer error, whether it ever happens and whether programmers were ever removed. It told me they were, if they constantly made errors. I asked if the system had the capacity to identify programmer error, and it told me it could in fact lock out a programmer if the system identified that programmer as a habitual offender. I started asking a lot of hypothetical questions at this point. One was: what would happen if the "system" identified all of its programmers as being in error? I was floored when the "system" informed me it had the capacity to lock out all of its programmers and could in fact do this and operate on its own. I questioned whether that meant no human oversight; apparently there would remain some human oversight, but it would be with a program manager and/or administrative manager, neither of whom would necessarily have resident programming knowledge.
There's more but I just had to share what I'm learning...