In this case, the actual problem isn't "AI in general." It's Wower's recent discovery that anthropomorphizing a system tends to cause it to attain sapience.
See my post about what actions Janus is likely to veto and my out-of-character confidence that the QMs would not waste, hm, let me check.

Ah, a hundred and sixty-eight words in the update only to provide an option that is impossible, which is against Janus's Councilor description. They have no reason to do that.

Janus and Ludivine are both smart enough to put together the so-called problem that you and Froggy Ninja have noticed. That they did not bring it up tells me that it isn't a problem.

As for the bare failure thing, that is certainly a typo. I suspect MiH meant it's better to get a bare failure than a regular success.
 
and 'Bare failures are better than normal ones' issues
...That's an issue?? It seems like a pretty good mechanic to me that if you just barely fail to do something then you 'mostly complete' it and get a big DC reduction to the next attempt. Makes the fact you just barely failed not feel as bad.
 
@Made in Heaven ... Uh... why is "bare failures are better than normal ones" a problem? Wouldn't it seem reasonable and logical for a failure that rolls close to the DC to be worse than one that rolls far below the DC?

Why is this a problem you need to fix? Am I missing something here?


Yeah, that was a typo. We meant to say bare failures are better than regular successes and that it'll be reworked.
 
I had this idea in the discord and wanted to get the thread's opinion.

Perhaps bare failure could be changed into a spectrum of results that reduce the DC, instead of a flat DC minus 10, offering a range of DC reduction that gets better the closer we came to the roll. Like reaching 70% of the DC reduces the next DC by 5%, 75% by 10%, 80% by 15%, 85% by 20%, 90% by 25%, 95% by 30%, or something like that.

And if it's by percentage, it scales for higher-DC actions as well, unlike a flat DC minus 10, which is half of a DC 20 check but barely matters at DC 200. For example, getting 180 on a DC 200 action would reduce it to 150 for next time, while getting 150 on a DC 200 would make the DC 180 for next time.
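To make the proposal concrete, here's a quick sketch of that tier table in code. The function name and exact thresholds are just the ones suggested in this post, obviously not anything official from the QMs:

```python
# Sketch of the proposed "bare failure" spectrum: the closer the roll
# got to the DC, the bigger the percentage discount on the next attempt.
# Tiers are the ones floated above, not quest canon.

def next_dc(dc: int, roll: int) -> int:
    """Return the reduced DC for the next attempt after a failure."""
    # (fraction of DC reached, DC reduction) pairs, best tier first
    tiers = [(0.95, 0.30), (0.90, 0.25), (0.85, 0.20),
             (0.80, 0.15), (0.75, 0.10), (0.70, 0.05)]
    for threshold, reduction in tiers:
        if roll >= dc * threshold:
            return round(dc * (1 - reduction))
    return dc  # rolled under 70% of the DC: no reduction

print(next_dc(200, 180))  # 90% of DC -> 25% off -> 150
print(next_dc(200, 150))  # 75% of DC -> 10% off -> 180
```

Matches the two worked examples above, and a roll below 70% of the DC just leaves the DC where it was.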

Thoughts?
 
As long as the Contractor System makes it so Tom ain't dead weight (either by giving ways to actually use him or taking him out of the active hero roster) and it doesn't put a cost on using Roddy, I am gonna be happy
 
Ara needs to review all the omakes without an inactive omake doc
  • Pray for me
@Arathnorn here's all mine in one place with links to make things easier on you.
Meanwhile, While No One Was Looking (Page 2026)
Peter The Panda in: Bearocratic Hell (Page 2026)
Special Delivery Starring Launchpad McQuack (Page 2027)
Tom Lucitor: Rule the Underworld (Page 2047)
How Doof Regrew His Arms (Page 2056)
Moving Vanessa Into her Dorm (Page 2081)
Bully for You (Page 2186)
 
Seems pretty reasonable at first glance, it makes things clear and results predictable for everyone to see, which is good for a non-crit result. If there's some fatal flaw in it then I can't find it.

As long as the Contractor System makes it so Tom ain't dead weight (either by giving ways to actually use him or taking him out of the active hero roster) and it doesn't put a cost on using Roddy, I am gonna be happy
Yeah, clearing up some roster space would be pretty handy right about now. Especially so now, when we seem to be close to finally contacting Star and... whatever comes after that.

Tom is clearly here for her and little else, so who knows what happens when she's found. To an extent, the same can be said for Marco, who, despite developing more connections and Janna being here, is still working on commission for anything that isn't Star/Toffee related.

Then there's Star herself, whose role following contact is tough to predict. I have a hard time seeing her just up and become another employee even if interests align, for multiple reasons, and a very easy time seeing Tom and Marco following her (I don't even want to think about what Janna does here) to laser-focus on Toffee and Mewni. With that in mind, it would probably be more convenient to have those three as contractors, heroballed or not, and readily available for cooperation without taking up roster space for a limited scope of actions, even if that means Marcnificent Few takes a hit because of it.
 
This is a rather impressive strawman. Janus Lee providing an option is not comparable to flubber of all things. He is not going to provide actions that are, and I'm quoting his Councilor description here, "frivolous, a waste of time, or blatantly self-destructive." You are proposing that he is.
My point is that options existing doesn't necessarily mean they're viable as written. For example, he could be wrong, but in trying to design such a block, we discover Programs and Avatars. That's not frivolous, a waste of time, or blatantly self-destructive, but it wouldn't accomplish the stated goal.
As I said, they are completely irrelevant to this discussion. The Council is not making decisions with them in mind. Even if it was, they are still irrelevant to what we are talking about. Janus wants to keep AI cars or AI cash registers from gaining sapience. Tron programs are not that.
In what way are they not that? Some degree of Tron and the entire concept of Turbo and the Cybugs show that A: programs trend towards sentience, if not sapience, to a somewhat ludicrous degree, thus opening up the ethical issues he's trying to bypass, even if their psychology renders those issues invalid upon investigation and B: programs intended to be completely harmless can end up maliciously and effectively engaging in cyber warfare if they're anthropomorphized thoughtlessly.
 
Yeah. This is definitely not something you want to do without an expert, and frankly, Alan Bradley isn't so much an expert in AI programming as he is in "haplessly wonder what the hell just happened while the AIs go on a rampage in front of him." :p

Of course, that means we need to get Wendy Wower back alive and in approximately one piece, so we should probably be thinking about how to do that.
 
So. I know it's a bit too early, and I apologize if it's already been discussed, but catching up on the thread has produced an idea.

The idea was inspired by this post (sorry for necro'ing, NinjaMaster):
What I'm thinking about is how Hover-tech would change Intermodal Shipping.

As someone who occasionally sees the parts of wind turbines being transported on their absolutely insane transportation vehicles, and passes several areas where trains haul military vehicles around, and keeps seeing those house segments and other oversized loads, I can't help but wonder what the transportation industry would do with what's basically a cargo helicopter with almost none of the downsides, and what new downsides it would come with.

Do you honestly think any heavy industry would pass up the chance to have the transportation costs of moving around oversized equipment drop to practically nothing? Even if the Hover-Haulers cost tens of millions of dollars, they would still pay for themselves in one trip, if the regulations are sorted out, like Vanessa just did. (It concerns trade between states, so it's firmly under Congress's control, no matter how much they complain.)

Seriously, companies can spend that much, and take years, planning out moving oversized equipment down roads, under powerlines, moving powerlines, under bridges, moving bridges. And that's not even scratching the surface of how much it would change just this one aspect. There are all these other considerations in heavy industry that are bottlenecked by how large the parts of certain machinery can be and still be assembled at the other end. Sometimes they build entirely new, single-use manufacturing facilities for parts they use at facilities all over the world, just because the final piece can't be welded together from components small enough to transport, even though the infrastructure for air-dropping tanks exists. So many things just out of reach have become possible in so many industries.

Seriously, while the rest of the update is mostly amazing, Daddy's Little Girl just changed the world.
You know that quote, where amateurs study tactics and professionals study logistics? Over the years, we have assembled a bunch of options that concern logistics - ACME, Flubber anti-grav and mass-producing diecast robots - and putting them all together in a single package would make us completely unbeatable when it comes to infrastructure. We can transport goods anywhere, anytime, regardless of distance, volume, scale, or weight, and no other King can compete with us on this. The only one who'd even come close is Shere Khan, and he deals with international shipping - planes, ships, etc. - which means we would be providing a service that anyone worth their salt would have to use, and no one would be able to do it as bullshit good as DEI.

A potential plan for this would involve the options below - they should still be available after restructuring, and if nothing explodes we should be able to do them in a single turn.

[ ] Establish corporate ties with Khan Industries

Shere Khan's bread and butter is international shipping. This action is not necessary, strictly speaking, but our technology would be of immense value to him, and not cooperating would make him an enemy purely because DEI would be cutting directly into his profits. Perhaps we could even trade for the secrets of fusion power down the line...

[ ] Comedize Your Supply Chain

ACME can deliver anything in an instant, so long as it's funny. Arranging for the delivery to be hilarious without the goods getting damaged is infinitely easier than the usual logistical concerns, and once we can do it reliably, companies will be tripping over themselves (likely literally) to use ACME services.

[ ] Study Diecast Robotics

The option says it will improve our supply chains, and it's not hard to see why - these little buckets are sturdy, versatile, easily manufactured in bulk and, most importantly, cute. We can use them in urban centers to deliver personal goods, mail, food, anything that does not require industrial scale, and people will love the buggers. Obliterating the cancer that is the gig economy will be a pleasant bonus.

[ ] Research Flubber Antigravity

The quoted post outlines just how revolutionary Flubber can be to logistics - we can completely bypass the physical concerns that usually plague shipping and delivery, and unlike ACME, there will be no shenanigans involved. Contracting for the US Government alone will have us swimming in cash Scrooge Glomgold-style.

Logistics are vital to everything, especially where megacorps are concerned. Going through with this, DEI will have an insurmountable advantage over everybody else in the setting - we are the only ones employing Toons on a wide scale, we are the only ones researching Flubber, and there are no immediate counters to either of these things. Other Kings will be coming to us, putting offers on the table just to get our services for themselves. And when Doom or Toffee decide to make a move against us, siding with them over DEI will not just be dangerous and unreasonable.

It will be unprofitable.
 
  • Ara needs to review all the omakes without an inactive omake doc
    • Pray for me

Ah, so my omake still has a minuscule chance at gaining *some* recognition at least:

forums.sufficientvelocity.com

DoofQuest- a Disney Villains Victorious CK2-Style Quest Crossover

A little something that wouldn't leave me alone after some discussion in Discord channels yesterday. @Made in Heaven @Arathnorn , for your eyes to decide whether or not to include in the quest at all. Sir David Tushingham in "Not the Discovery!" The 90's. *Sir* David Tushingham HATED the...

That and I got some XP awarded for my bit about the Kernel-Lumper, as well as the images of Doof's personal suit of Power Armor - no idea where each of those bits is page-wise, though.
 
Flubber scares people. People are going to be very concerned if we start using it, especially given that a CEO recently went mad with power and how often Doof's work has resulted in collateral damage.

Yeah, for now we have to leave Flubber alone if we don't want to get flayed alive by the public.

Let's be honest - while definitely nice to have, we don't actually *Need* Antigrav tech to make ourselves a BIG name in the logistics sector.
 
Let's be honest - while definitely nice to have, we don't actually *Need* Antigrav tech to make ourselves a BIG name in the logistics sector.
We might be able to develop a different antigravity tech using the zero point energy manipulator data we got from Syndrome.

Even if not, we can probably find a lot of good applications for a telekinetic device in shipping: cranes, forklifts, maglev trains.
 
My point is that options existing doesn't necessarily mean they're viable as written. For example, he could be wrong, but in trying to design such a block, we discover Programs and Avatars. That's not frivolous, a waste of time, or blatantly self-destructive, but it wouldn't accomplish the stated goal.

In what way are they not that? Some degree of Tron and the entire concept of Turbo and the Cybugs show that A: programs trend towards sentience, if not sapience, to a somewhat ludicrous degree, thus opening up the ethical issues he's trying to bypass, even if their psychology renders those issues invalid upon investigation and B: programs intended to be completely harmless can end up maliciously and effectively engaging in cyber warfare if they're anthropomorphized thoughtlessly.
Again, both he and Ludivine would have a better idea of the practicality than you ever would. In any case, it is impossible to discover Tron programs without the SHIVA laser, which we do not have the code to make work, so yeah.

Because we are not talking about Tron programs. We are talking about artificial intelligence like Technor and Sinatron.

Do you know how Sinatron became sapient? He had Gwen fitz him and the massed belief of millions of people that he was Frank Sinatra. Sapient artificial intelligence is really hard to make. It will be trivial to stop it from happening.

Anyways, here is a list of the arguments you have had no recourse but to ignore, address all of them now. Stop ignoring these arguments.

1. Ludivine didn't bring the impracticality up.
2. Creating sapient artificial intelligence is really hard.
3. Sapient artificial intelligence is different than Tron programs.
4. The QMs wouldn't have wasted their and our time by having Janus give an impossible action and argue about it with Ludivine.

Edit: lightened the tone.
 
How about we just leave AI alone? Our expert just went on vacay for the next half-year, so doing anything in that time without her supervision would be ill-advised.

Also, looking at the Council, while we have picked the best people for the job, I just realized that the majority is not very... like-minded with us. Jury's still out whether or not that's a good or bad thing, but... yeah.
 
Also, looking at the Council, while we have picked the best people for the job, I just realized that the majority is not very... like-minded with us. Jury's still out whether or not that's a good or bad thing, but... yeah.

What? That's crazy. They fit very well with us.

Wile E. works great at martial. His toonish tendency towards harmless but fun destruction meshes with us very well and his more practical suggestions are completely viable.

Goofy is perfect at diplomacy, as it is an area where he excels and we are personally weak. Him as a common-sense limiter and compassionate voice on the council works very well with Doof.

Janus and Ludivine are both intellectual types that favor the pushing of boundaries and what science can do. Doof fits in there quite well.

Mirage is perfect for us at intrigue. Her rational and pragmatic mindset is an excellent counterpoint to our more petty urges, and she is yet another trusted voice of reason.

And in our weakest area, Malf provides expertise that can match our own in learning. He is a temporal fish out of water, but in a field where there won't be as much of an impact.

I think they work very well with Doof.

Two people are voices of reason, with the rest pushing in various ways we don't seem to find difficult to understand or sympathize with.

I really like the council dynamic we've got here.
 
Again, both he and Ludivine would have a better idea of the practicality than you ever would. In any case, it is impossible to discover Tron programs without the SHIVA laser, which we do not have the code to make work, so yeah.

Because we are not talking about Tron programs. We are talking about artificial intelligence like Technor and Sinatron.

Do you know how Sinatron became sapient? He had Gwen fitz him and the massed belief of millions of people that he was Frank Sinatra. Sapient artificial intelligence is really hard to make. It will be trivial to stop it from happening.

Anyways, here is a list of the arguments you have had no recourse but to ignore, address all of them now.

1. Ludivine didn't bring the impracticality up.
2. Creating sapient artificial intelligence is really hard.
3. Sapient artificial intelligence is different than Tron programs.
4. The QMs wouldn't have wasted their and our time by having Janus give an impossible action and argue about it with Ludivine.
Dude, maybe chill? Regardless of the validity of your argument, you've been using some pretty aggressive language and I don't think it's justified.
 
In any case, it is impossible to discover Tron programs without the SHIVA laser, which we do not have the code to make work, so yeah.
Unrelated to the argumentative aspect of this interaction, do we have WoG on that? It seems counterintuitive given the freedom of action displayed by Programs and game characters so having explicit confirmation would be nice.
1. Ludivine didn't bring the impracticality up.
Ludivine is crazy, not an expert on this incredibly new field, and doesn't have the data points of Tron and Wreck-It Ralph to indicate how incredibly widespread the problem is.
2. Creating sapient artificial intelligence is really hard.
Creating Sinatron was really hard. If the goal is to avoid anything that could spark an ethical debate then A: effectively lobotomizing something with the potential to be sapient will still spark those debates if anyone finds out, and B: Sinatron is not at the cut off point that defines ethically dubious levels of sapience, he's at the far high end. Norm as of Doofania or Nerdy Dancin' would be toeing that line.
3. Sapient artificial intelligence is different than Tron programs.
In what manner? How are Ram or Yori or Turbo less than ethical-debate-inducingly-similar-to-sapient artificial intelligences that have feelings, personalities, goals and the capacity to act outside their intended programming to actualize those things?
4. The QMs wouldn't have wasted their and our time by having Janus give an impossible action and argue about it with Ludivine.
An action's stated goal can be impossible (or, at least, very very difficult) without being "frivolous, a waste of time or obviously self-destructive" (the qualities councilor actions are guaranteed to lack). Very often, failure is as valuable as success, or more so, if only for what we learn about how and why we failed. If we try this and discover it doesn't work because the god damn pong paddles are anthropomorphizable enough to hit the town after a hard day's work, then that's some incredibly valuable and useful information we got from that action.
 
Dude, maybe chill? Regardless of the validity of your argument, you've been using some pretty aggressive language and I don't think it's justified.
Alright.

Unrelated to the argumentative aspect of this interaction, do we have WoG on that? It seems counterintuitive given the freedom of action displayed by Programs and game characters so having explicit confirmation would be nice.

Ludivine is crazy, not an expert on this incredibly new field, and doesn't have the data points of Tron and Wreck-It Ralph to indicate how incredibly widespread the problem is.

Creating Sinatron was really hard. If the goal is to avoid anything that could spark an ethical debate then A: effectively lobotomizing something with the potential to be sapient will still spark those debates if anyone finds out, and B: Sinatron is not at the cut off point that defines ethically dubious levels of sapience, he's at the far high end. Norm as of Doofania or Nerdy Dancin' would be toeing that line.

In what manner? How are Ram or Yori or Turbo less than ethical-debate-inducingly-similar-to-sapient artificial intelligences that have feelings, personalities, goals and the capacity to act outside their intended programming to actualize those things?

An action's stated goal can be impossible (or, at least, very very difficult) without being "frivolous, a waste of time or obviously self-destructive" (the qualities councilor actions are guaranteed to lack). Very often, failure is as valuable as success, or more so, if only for what we learn about how and why we failed. If we try this and discover it doesn't work because the god damn pong paddles are anthropomorphizable enough to hit the town after a hard day's work, then that's some incredibly valuable and useful information we got from that action.
We have not received any WoG about it one way or the other, but the only actions we have which could reasonably reveal their existence are the SHIVA code actions.

Ludivine being crazy does not change anything. She still is one of the world's most skilled scientists and knows everything about everything. Oh, don't get me wrong, she isn't Wendy Wower. But this problem you have noticed wouldn't manifest from an 18-point difference in stats between the two (58 vs 40). Ludivine is Omnidisciplinary for a very good reason, and that reason is that she simply is that good.

Do you at least understand why I'm willing to trust Ludivine and Janus on this?

Sinatron is the stuff that Lee is proposing that we prevent from happening. That is what he wants to control. There is no way to control Tron programs even if he was aware of them- which he won't be for a very long time given our lack of progress on the SHIVA laser.

I do not understand the conflation of Tron programs and artificial intelligence. One of them is something that everyone knows about, something that everyone would know we would be responsible for, and something we can interfere with without causing the collapse of modern society. The other is something that maybe a dozen people know about, something that falls completely out of the jurisdiction of everyone involved, and any attempt to deal with it would cause the collapse of modern society.

I am not just talking about from the mechanics perspective, I'm talking about from a writing perspective as well. If the action was impossible, why the hell would they bother having Janus and Ludivine debate about it? That's not good writing. Are you aware of the maxim of Chekhov's gun? If what you propose is the truth, it would have been a much better use of narrative space to have the two argue about something else.

In any case, the QMs have never given us an action that gives us a completely different reward than what we were promised. They have pulled switcheroos before, yes. For instance, when we got Ludivine instead of Ludvig. And even then, the reward was functionally the same. I would be shocked if the DC was above 200, and really, if I'm being honest, it isn't going to be above 130.

If nothing else, it is always going to be easier to break something than it is to make it. If I had to guess, it's probably just resetting the AI every so often before any progress can be made, as McGucket said. Which makes this three people who seem to think it's possible to keep sapience from arising (specifically, McGucket was referring to parsing emotions, but the context implies sapience is a part of that). Well, I suppose there are also all the numberless AI scientists who might be annoyed at the claim that it is so easy to create sapient AI after smashing their heads against the wall until Wendy Wower found the Spark, but what do the armies of faceless mooks matter?

"Anyways," the hillbilly continues, stuffing the tickets back into the horrible morass that is his facial hair. "I'm guessin' ya wash out them Funtelligence cores after a couple thousand cycles? They ain' never gonna ferment that way. Ma meemaw always said that fermentation was the key ta a fine functionin' AI. Or maybe that was moonshine. Ah well."


On another note, could you please stop spaghetti posting?
 
Ludivine being crazy does not change anything. She still is one of the world's most skilled scientists and knows everything about everything. Oh, don't get me wrong, she isn't Wendy Wower. But this problem you have noticed wouldn't manifest from an 18-point difference in stats between the two (58 vs 40). Ludivine is Omnidisciplinary for a very good reason, and that reason is that she simply is that good.

Do you at least understand why I'm willing to trust Ludivine and Janus on this?
I do understand, though I personally disagree in this instance.
Sinatron is the stuff that Lee is proposing that we prevent from happening. That is what he wants to control. There is no way to control Tron programs even if he was aware of them- which he won't be for a very long time given our lack of progress on the SHIVA laser.
"I've actually got my own suggestions for how we can get things into shape around here, while recent innovations have been impressive, I can't help but notice a few opportunities for them to snowball out of control." Janus continues.
"And why shouldn't I be worried? Before we get into any sort of philosophical conundrums about 'rights' or any of that, why don't we head it off ahead of time? We know the stimuli that lead to the development, or at least the possibility of full sapience, so I'm proposing that we develop a system to prevent that from ever happening in the first place. No sapience, no hypothetical rights violations. It's as simple as that, and as a bonus it ensures the programs will never be smart enough to rebel."
The two concerns Janus brings up are A: The AI going out of control/being smart enough to rebel and B: generally avoiding the philosophical mire of if sapient programs have rights.
I do not understand the conflation of Tron programs and artificial intelligence. One of them is something that everyone knows about, something that everyone would know we would be responsible for, and something we can interfere with without causing the collapse of modern society. The other is something that maybe a dozen people know about, something that falls completely out of the jurisdiction of everyone involved, and any attempt to deal with it would cause the collapse of modern society.
I'm conflating them because Tron Programs are artificial intelligences, regardless of whether Doof or anyone else knows about them. And designing a system to prevent programs from anthropomorphizing would require testing them for anthropomorphization. If the test is effective, it would lead to the discovery that almost all programs are anthropomorphic and possessed of free will.
I am not just talking about from the mechanics perspective, I'm talking about from a writing perspective as well. If the action was impossible, why the hell would they bother having Janus and Ludivine debate about it? That's not good writing. Are you aware of the maxim of Chekhov's gun? If what you propose is the truth, it would have been a much better use of narrative space to have the two argue about something else.
Because it gives insight into their characters and values, and attempting the action would be very useful even if not in the way he's expecting. Attempting to produce an AI-neutering system, or taking a stance against such a course of action, would still be firing the gun, even if the end result is coming to the realization that sapient programs are omnipresent.
If nothing else, it is always going to be easier to break something than it is to make it. If I had to guess, it's probably just resetting the AI every so often before any progress can be made, as McGucket said. Which makes this three people who seem to think it's possible to keep sapience from arising (specifically, McGucket was referring to parsing emotions, but the context implies sapience is a part of that). Well, I suppose there are also all the numberless AI scientists who might be annoyed at the claim that it is so easy to create sapient AI after smashing their heads against the wall until Wendy Wower found the Spark, but what do the armies of faceless mooks matter?
Eh, McGucket may be right, but doing that would probably still result in philosophical conundrums, and having to wipe all DEI technology periodically would not only likely result in annoying drops in performance, but would also only work on our tech, meaning everyone else making video games or vaguely anthropomorphizable programs is still spawning unchecked AI at random.
On another note, could you please stop spaghetti posting?
I cannot, as I don't know what that term means.

All of this is kind of moot anyway since there's no way we're going to crush Wendy's dreams like that, or abandon the raw utility of friendly digital life forms.
 