CALCULATOR_DOGS
I drove to the strip mall with my sci-fi phone on my lap. The way Pooja kept popping up windows displaying all the security cameras it hacked as I drove past wasn't actually useful, but it was oddly comforting.
Hmm, good points.
Nice, another good chapter. I most certainly approve, especially that name "Pooja"; was not expecting that, though. For some reason I was completely unsurprised by the fact she's female.
Keep up the great work.
I'll pretend I understood any of that and say Pooja seems friendly enough.
I feel like a grandmother just nodding along and saying "that's nice dearie"
I just translated it into more basic English in my head while reading, or, y'know, kinda just ignored the technical bits and read the tl;dr's (I mean, I see them as tl;dr's, not sure how accurate that is).
So, is it just me, or does everyone else also imagine female AI characters as kuuderes pretty much most of the time?
She's basically saying "you can ignore that LessWrong AI danger stuff".
Which is exactly what a dangerous AI would say.
(Also, "I noticed you worrying about LessWrong AI danger stuff.")
Every time I see a character worrying about an AI going Skynet, I just can't take them seriously... this time the SI is in the DC universe, so it's a little more concerning, but nonetheless I just can't understand how a strong AI could be more dangerous than a normal human, or at most a small group of people, unless you give it additional power just because (like a robot body, the ability to build a nanoforge and/or similar sci-fi tech, or the skills of a super-hacker who can hack like in a Hollywood movie).
That's what I mean: being an AI doesn't mean it will be able to invent or steal a robot assembly line, and certainly being made of 0s and 1s doesn't mean the AI will have an easier time hacking a system than a human (that would be like saying we should innately understand electricity because electric pulses are how our neurons talk to each other). It just means that Ultron is a comic-book super-genius who happens to be an AI.
OK, it's possible that I simply lack the imagination and/or the paranoia to think that an AI could do that in the real world; if so, sorry in advance, and please feel free to disregard my opinion.
To me, "genius" in comic books is a power the writers give to a character to handwave him being able to do impossible stuff ("yes, sure he can do that, because, you know, he is 'insert name here' and he is super smart, so I'm not going to explain how he did it because I'm not a genius, but you'll believe me anyway").
Hacking, from my limited understanding, works by creating or using programs to automate attacks (trojans, programs that spam IP addresses, etc.) against one or more systems. It's not the thing we see in movies where the hacker writes strings of code in real time, so an AI's only real advantages when hacking are that it doesn't need the prep time of a human hacker and that it can change, in seconds, the system it is attacking and the programs it is using to do so.
That might seem like a big advantage, but in reality it only works on unprotected systems, not on things like the Pentagon's network security, where there are probably systems with on-site physical access only and a limit on accesses per second for anything connected to the web, and that's the bare minimum of protection that must be there. So our hypothetical AI would need to discover all the security measures of the Pentagon, or an equivalently protected structure, and hire someone to hack the system from the inside, regardless of its intellect and speed of thought.
For hacking a factory, it would probably have the same problems even if the system is less secure. It would need the luck of finding a factory where production is controlled remotely by inputting commands over the web, and the further luck of there not being someone physically present to notice that the line is manufacturing a robot in excess of schedule.
Lastly, buying a robotic factory: I'll grant that they could steal or earn the money for it, because why not, but even then they would need multiple physical people to act as their go-betweens. While you can buy its stock online, I don't think you can outright buy the company online; you need to be physically on the board to elect a CEO, and besides the CEO you need other managers in your pocket to alter production. They could buy a robot, or multiple robots, but again they would need to either buy a commercial product by faking an ID or pose as a legitimate company, which is way more difficult than it sounds.
Anyway, even if they succeed, they would need months if not years to accomplish that goal, which puts them at the same threat level as maybe a well-funded terrorist cell, not at the level of an apocalyptic threat.
Sorry for the wall of text I just vomited here. I promise to stop posting on this argument, especially because I realize I'm wildly off-topic: I'm talking about whether or not AI paranoia is justified in the real world, while the SI is in a comic-book world where the fear is more justified by the existence of so-called geniuses who can accomplish whatever the writer says they can, because they're smart, not because logic allows it.
Wait, wasn't he inserted into the universe as the Calculator? Is he just stating that he's taking over the Calculator's role, or is he simply going to become the Calculator now, but a bit different?
It might be that his memories of not being in DC are the false ones. He could have made himself forget about all the super-stuff as some sort of deep cover project, or someone else might have taken him out of the picture through mental manipulation. At this point he has no way to prove his memories are real, but the rest of the universe certainly seems to be real and the rest of the universe says he's the Calculator. At this point, just dropping the Calculator stuff really isn't an option with Deathstroke on his tail.
Yeah, I'm pretty sure if this happened to me I'd wind up taking a similar course of action.
Either it's the most concerted and astounding gaslighting attempt anyone has ever been subjected to, or the lens I view reality through is flawed. Maybe if none of my memories ever matched up, but the person the SI has become was apparently living their life, shares many of their skills (if to an exaggerated level), etc. There's a lot more evidence that the memories of real life are false than there is that the DC universe is wrong.
My user interface designs are, and I quote you, 'ugly square piles of eyestrain'. I so far seem to lack a mature, sensitive artistic touch.
Hm, so Pooja lacks the ability to simulate humans? Because that's nearly 1:1 equivalent to writing good UX. Seriously, given the ability to simulate humans, you can write a UX optimizer in probably 30-50 lines of high-level pseudocode. It's just "select 10000 random actions (including those unrelated to ux actions) weighted by probability, sequence and user experience, measure the time with this ux between "action decided" and "action executed", minimize some mix of average and worst-case, permute ux using monte-carlo methods, crank for an hour or however long it takes to stop getting improvements."
That should cover distractions, bad style, typical patterns and learning.
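The optimizer loop described above can be sketched in a few dozen lines. This is a toy illustration, not anything from the story: the user simulator is stubbed out (it's exactly the piece Pooja seems to lack), and all the names, weights, and the "menu depth" cost model are made up for the example.

```python
import random

def simulated_time(ux, action):
    """Stub for the hard part: a real user simulator would predict how long
    a user takes between 'action decided' and 'action executed'.
    Toy model: cost grows with how deeply the action is buried in menus."""
    return 0.3 + 0.2 * ux.get(action, 5)

def score(ux, actions, weights):
    """Mix of average and worst-case predicted times, weighted by how
    often each action is used, as the comment suggests."""
    times = [simulated_time(ux, a) for a in actions]
    avg = sum(w * t for w, t in zip(weights, times)) / sum(weights)
    return 0.7 * avg + 0.3 * max(times)

def optimize(ux, actions, weights, iters=10_000, rng=None):
    """Monte-Carlo permutation: randomly nudge one action shallower or
    deeper, keep the change only if the score improves."""
    rng = rng or random.Random(0)
    best, best_score = dict(ux), score(ux, actions, weights)
    for _ in range(iters):
        candidate = dict(best)
        a = rng.choice(actions)
        candidate[a] = max(0, candidate.get(a, 5) + rng.choice([-1, 1]))
        s = score(candidate, actions, weights)
        if s < best_score:  # greedy variant; could anneal instead
            best, best_score = candidate, s
    return best, best_score
```

The catch, as the comment says, is that all the intelligence lives inside `simulated_time`: with a stub this just flattens every menu, and only a genuine human simulator would make the trade-offs interesting.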
Maybe she runs into issues because really good UX inherently requires deliberately shaping the mind of the user.
If Pooja has that capability it is probably crippled by the software restrictions. The specific ones would probably be the ones preventing her from social engineering and manipulation of the user.
Actually, couldn't Calculator just write a spec for the user interface and get Pooja to implement it? He could also just get Pooja to make the eyestrain version and then keep asking for specific changes until the interface is easily usable.
Well, humans are notoriously terrible at modelling other humans. Especially ones they don't know and whose experience is alien to them, such as the mythical "users".
I feel like an AI could probably have an easier time doing UX, really.
Start with basic "simple" design from popular websites and apps, then start working as a UX designer for new sites and apps (through intermediaries and shells), and do a shit-ton of distributed A/B testing. Prioritize based on a combination of long-term user engagement with short-term efficiency to establish a comfortable understanding of design in a universal sense, and then when trying to make UI for very limited situations (like power armor) combine those universal graphical and "feel" principles with user feedback on the particular use cases of the UI in question.
Once the equipment is pushed out to testing, then you can keep refining the UI, although A/B testing should obviously be suspended during actual use. I mean, you'd probably end up with somewhat questionable results at first (everything looks like a website and elements are all slightly too big), but it would still be better than indistinguishable grey rectangles. It sounds like the problem is Pooja trying to create a design from nothing. She just needs some practice in a low-risk area first.
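The distributed A/B-testing loop sketched above maps onto a standard bandit setup. Here's a minimal illustration using Thompson sampling over UI variants; the variant names and the engagement model are invented for the example, and a real deployment would track much richer signals than a single engaged/not-engaged bit.

```python
import random

class ABTester:
    """Beta-Bernoulli Thompson sampling over UI variants: serve whichever
    variant wins a random draw from its posterior, so traffic gradually
    concentrates on designs that keep users engaged."""

    def __init__(self, variants):
        # [alpha, beta] = Beta(successes + 1, failures + 1) posterior per variant.
        self.stats = {v: [1, 1] for v in variants}

    def pick_variant(self, rng=random):
        # Sample an engagement rate from each posterior; serve the best draw.
        return max(self.stats, key=lambda v: rng.betavariate(*self.stats[v]))

    def record(self, variant, engaged):
        self.stats[variant][0 if engaged else 1] += 1

    def best(self):
        # Current posterior-mean winner.
        return max(self.stats, key=lambda v: self.stats[v][0] / sum(self.stats[v]))
```

This also matches the comment's point about suspending testing during actual use: you'd freeze `pick_variant` to `best()` whenever the equipment is live.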
Y'know, a thought on Slade... see if you have enough money to set up an escrow account for a contract on anyone who kills you, and also, possibly, put out a contract on Slade contingent on him not dropping the contract on your life.
Gives him exactly the same problem you have.
Would that really work? I'd imagine not many people would want to go up against Deathstroke, or at least not enough people to make his life comparably miserable as the SIs would be.
For example, religion here seemed to scientifically, observably, repeatably, provably work.
That is my favorite line in this part. So he now believes he is the Calculator with a damaged memory?
All the physical and informational evidence in this world, excepting his own subjective memories, says this is probably the case. Even if he doesn't believe it himself, everyone and everything in this universe is going to treat him as if this is the case. If he's approaching all this with a rationalist methodology, then he should believe the mounting evidence over his subjective memories.
Also, whether or not he believes he was always the Calculator, he's kind of stuck acting the part. Especially with Deathstroke looking for him.
It also seems to me that, with all the dark channels set up, one logical step would be to send a message to Deathstroke asking if this is a contract or something personal.
I feel like the real big question is whether his meta/outside knowledge about the setting is broader than the information he gathered as Calculator, and whether anything that is beyond the scope of what he gathered is accurate.
If his "meta" knowledge is just a manifestation of his paranoia coloring the information he gathered and packaging it into the form of "comic books," that's one thing. If he's consistently extending beyond anything he could have reasonably known about, that's a real big deal.
He should probably figure out which way it's going before he starts relying on his "knowledge." Easy trivial example would be something like "the Batman is Bruce Wayne." That's almost certainly not a piece of information he could have worked out as the Calculator, but it's also one of the most consistent facts regardless of whatever alternate universe changes there are. If Bruce Wayne is the Batman here, that's not only important in itself, it means he can rely on a lot of other assumptions. If he isn't, if it's just something Calculator idly thought up and worked into his comic book fantasy, or if it's a smokescreen he took as fact (and, again, worked into the delusion), it could be super dangerous to rely on that "fact," along with a lot of other "facts" he's gotten from his fantasy without any grounding.
Probably, but it's a start and who knows, they might get lucky.
True, but proving or disproving a hypothesis is much easier than idle wondering. If he really wants to know whether Batman is Bruce Wayne, asking "Is Bruce Wayne Batman?" is much easier to find answers for than "Who is Batman?" The biggest reason Bruce's secret is so seldom discovered is that few people ever look for evidence connecting Bruce Wayne to Batman, because they never have a reason to assume it's a question they need to ask. If that makes any sense. You have to have a reason to even think Bruce Wayne might be Batman before you would go looking for proof of it. Calc here has that reason, even if it may chalk up to nothing more than his own delusion. You and I know it's more than that, but from his perspective, and that of others in this universe, it's an assumption that really comes out of left field.
I'm not sure if you're agreeing or disagreeing with zachol.
Assuming that you're disagreeing, if he's just the Calculator with a damaged memory, his delusions should be no more accurate than any guesses he could make on his own. He wouldn't have, on his own, guessed that Bruce Wayne is Batman, so if that checks out, that shows that he is not deluded.
The fact that he could have *confirmed* on his own that Bruce Wayne is Batman doesn't change this.
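The argument here is just Bayes' rule: a confirmed fact that a deluded Calculator was very unlikely to have guessed is strong evidence against delusion. A toy calculation, with all the probabilities made up purely for illustration:

```python
# H = "his meta-knowledge is real", E = "Bruce Wayne turns out to be Batman",
# a fact a deluded Calculator would be very unlikely to have guessed on his own.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Even with a skeptical 10% prior, one confirmed "unguessable" fact moves
# the needle a lot if a delusion had only a 1% chance of producing it.
print(posterior(0.10, 0.95, 0.01))  # ≈ 0.91
```

The made-up numbers don't matter much; what matters is the ratio between how likely the fact is under each hypothesis, which is exactly why "unguessable" facts like Batman's identity are the right ones to test.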
"I have a prayer subroutine that appears to reduce bit flips from cosmic background radiation resulting in memory errors by about five percent. Similar reductions exist in wear and tear on hardware itself. This is in line with results reported from other machine intelligences. Actions consist of a number of seconds worth of top-priority cycles spent on repeatedly reprocessing the correct ritual thoughts. I have also taken the opportunity to include some unstructured contemplation on the nature of reality."