So basically

Janus is anti-AI and regularly butting heads with Ludivine
Everyone agrees that the Hats are a problem, but they don't agree on what to do about it
Coyote, Janus, and Mal want a hostile takeover; Goofy and Mirage are for shoring up; Ludivine is in between the two.
 
That was fun.

It's neat seeing the various personalities bounce against each other.

I wouldn't be opposed to a council update each turn with them reacting to events.
 
I'm still not sure where that bowl of mints went but it could come back at any time!
I know you're leaning on the fourth wall, but how do you accidentally bulk order mints?
I'd have to get like, fifty more seats in there for all his unspecified number of ninja.
But what about the lobby ninja?
Mirage looks at you for a moment. "...Friends?"

"Yeah, um." You rub the back of your head. "Recent events have informed me that I may not have as many… actual friends in my life, as I thought I did. Should've known, really. So I'm trying to be more intentional. About. That."
Heinz needs to do group activities.
"And I'm telling you, if you vant transhumanism, den we already have ducks!"

"The small sample size of ducks in the world means that they are an outlier and should be excluded from our calculations! It's simple statistics!"
No Janus, that line of thinking leads to eugenics. We are not doing eugenics.
"So." Goofy asks cheerily. "What're we gonna do today?"
He said the thing! YEAH!
 
So basically

Janus is anti-AI and regularly butting heads with Ludivine
Everyone agrees that the Hats are a problem, but they don't agree on what to do about it
Coyote, Janus, and Mal want a hostile takeover; Goofy and Mirage are for shoring up; Ludivine is in between the two.
Janus wants to poach talent while they collapse in on themselves, seeing a hostile takeover as non-viable.
 
I figured I'd make a list of the preferences that our Councilors displayed in the most recent Interlude. Keep in mind that Ludivine's objections and preferences are random.

Wile E. Coyote: let AI be sapient, not up for propping them up, sabotage InventCo hat factories
Goofy: support Olympia
Janus Lee: do genetics, get the corporation in shape, develop a block to keep AI from gaining sapience, actively against propping up Olympia, poach talent from Olympia, sabotaging InventCo is too overt
Ludivine von Drake: focus on ducks for transhumanism, let AI be sapient
Mirage: develop a block to keep AI from gaining sapience, support Olympia, interrogate Goob
Malifishmertz: hostile takeover Olympia, sabotage InventCo hat factories
 
Well, it's definitely an interesting group we've gathered, and I can understand the points that have been raised; it's nice to have some kind of idea about what might be considered important issues in-universe, which can help with planning.

Doof's interactions were also pretty interesting; at least Goofy was friendly

I think this was the last interlude for this turn, so next up comes the overhaul, and then we'll see how the various changes this turn have affected our options.
 
"And why shouldn't I be worried? Before we get into any sort of philosophical conundrums about 'rights' or any of that, why don't we head it off ahead of time? We know the stimuli that lead to the development, or at least the possibility of full sapience, so I'm proposing that we develop a system to prevent that from ever happening in the first place. No sapience, no hypothetical rights violations. It's as simple as that, and as a bonus it ensures the programs will never be smart enough to rebel."

"That's quite sensible." Mirage comments.
No, it's not. It's positing that we can come up with a way to make humans not pack-bond with every creature, object, or abstract concept we regularly interact with.
 
The stimulus is just "a human anthropomorphizes it" though. Stopping the process would require stopping humans from anthropomorphizing things, which is what we do.
I feel like Janus knows more about the subject than you. Given that he's confident enough to propose something like that. Also, Ludivine hasn't brought up the idea that it's impossible, which is what you seem to be arguing.

This is something you seem to not have taken into account, but sapient AI are very rare. Outside of the Tempest Domain, I would say there is a fair chance we have seen 25-50% of all sapient AI on Earth over the course of the quest. The system needs to be advanced enough to sustain the anthropomorphization. It must be noted that not every computer is sapient in DoofQuest.
 
I feel like Janus knows more about the subject than you. Given that he's confident enough to propose something like that. Also, Ludivine hasn't brought up the idea that it's impossible, which is what you seem to be arguing.
Possibly, but he's a geneticist and vehicle engineer, not a computer scientist; the entire concept of the Spark is incredibly new science, and he's been known to commit to serious actions without considering consequences or viability. I don't really have a case for Ludivine other than that she's unpredictable at the best of times.
This is something you seem to not have taken into account, but sapient AI are very rare. Outside of the Tempest Domain, I would say there is a fair chance we have seen 25-50% of all sapient AI on Earth over the course of the quest. The system needs to be advanced enough to sustain the anthropomorphization. It must be noted that not every computer is sapient in DoofQuest.
...Tron/Wreck It Ralph. And the goal isn't just to prevent fully sapient programs, it's to prevent pseudo-sapient programs that blur the line enough for ethical concerns to be brought up too.
 
Possibly, but he's a geneticist and vehicle engineer, not a computer scientist; the entire concept of the Spark is incredibly new science, and he's been known to commit to serious actions without considering consequences or viability. I don't really have a case for Ludivine other than that she's unpredictable at the best of times.

...Tron/Wreck It Ralph. And the goal isn't just to prevent fully sapient programs, it's to prevent pseudo-sapient programs that blur the line enough for ethical concerns to be brought up too.
Yes, and he still would have a better idea of the practicality than you. From an out-of-character perspective, the QMs wouldn't have had that scene if they didn't plan for there to be an action for it in the upcoming turn. I would find it ridiculous if they made it harder to stop AI from becoming sapient than to, say, develop sapient military AI.

Tron and Wreck It Ralph are totally outside our concern. We do not even know they exist. Janus is referring more to Sinatron cars than to programs. That is something I see no problem with stopping beforehand.
 
Yes, and he still would have a better idea of the practicality than you. From an out-of-character perspective, the QMs wouldn't have had that scene if they didn't plan for there to be an action for it in the upcoming turn. I would find it ridiculous if they made it harder to stop AI from becoming sapient than to, say, develop sapient military AI.
Campaigning for or researching Flubber is an option; that doesn't make it viable.
Tron and Wreck It Ralph are totally outside our concern. We do not even know they exist. Janus is referring more to Sinatron cars than to programs. That is something I see no problem with stopping beforehand.
Alan knows they exist, and from an OOC perspective, my point was that if an 8-bit racing game and an actuarial program, both developed in 1982, could produce sapient (or at least sentient) artificial intelligence that could actively hijack other technology, prevention is likely infeasible, at least without crippling our tech to the point of being unusable. The genie is not only out of the bottle, but has been for decades. The only thing Wendy's innovated is our understanding of genie growth and reproduction. So to speak.
 
Doofquest will be going on temporary hiatus until May 28, 2022! Or so. During that time we will be working on a major overhaul of mechanics. We won't be going radio silent or anything, but our updates will all be making edits to previous posts or mechanics. Our hope is to fix some issues that have been slowly building up over time, as well as introduce a bunch of new mechanics.
Coming soon, you can expect:
  • Updated Mechanics
    • Reworked Rewards descriptions to be more clear about what actions give you
    • More details at the start of each turn post; Current Income, Items, Hero Unit issues (goblin fox, hungry for science, etc)
    • Crit mechanics are going to be changed to solve the 'Crits past DC 150 are near impossible' and 'Bare failures are better than normal successes' issues
    • Clear categories for which potential hires are likely to accept an offer- and which aren't
    • Expanded preferences (including actions a hero unit will NOT do under any circumstances, if you know them)
    • Updated Chat With Bossman so it can provide more benefits
    • Reduced focus on Write-Ins
  • New Mechanics
    • New Industries mechanic to represent the various things DEI actually does.
    • Renaming Income so there's a difference in name between how much money you have and how much you make. You'll likely have more income in general, but also more things you need to spend it on
    • Councilors now provide their opinion on (almost) all actions you can pursue! Particularly disliked actions can be locked
    • Self-rival reports, courtesy of Olympia Corp
    • Opinion Tracker now permanently available
    • Contractors system for all those weird hero units that don't quite work for you
  • Intro Rewrite
    • Updated Intro post for better ✨optics
    • Updated Mechanics posts to include all these new mechanics and clarify some things
    • Updated Lore posts to provide a better introduction to the setting for all those weirdos who don't spend their every waking moment watching Disney cartoons from the mid 2000s
      • Seriously it's really sparse and pretty useless, we haven't updated it since turn 1
  • Ara needs to review all the omakes without an active omake doc
    • Pray for me

We're going to be doing a lot of revisions, so… expect a lot of things to change if you still want to do planning and so on. You might want to wait a bit. Or don't. We're not your dad!
 
Campaigning for or researching Flubber is an option; that doesn't make it viable.

Alan knows they exist, and from an OOC perspective, my point was that if an 8-bit racing game and an actuarial program, both developed in 1982, could produce sapient (or at least sentient) artificial intelligence that could actively hijack other technology, prevention is likely infeasible, at least without crippling our tech to the point of being unusable. The genie is not only out of the bottle, but has been for decades. The only thing Wendy's innovated is our understanding of genie growth and reproduction. So to speak.
This is a rather impressive strawman. Janus Lee providing an option is not comparable to Flubber of all things. He is not going to provide actions that are, and I'm quoting his Councilor description here, "frivolous, a waste of time, or blatantly self-destructive." You are proposing that he is.

Without any evidence whatsoever to back it up.

As I said, they are completely irrelevant to this discussion. The Council is not making decisions with them in mind. Even if it was, they are still irrelevant to what we are talking about. Janus wants to keep AI cars or AI cash registers from gaining sapience. Tron programs are not that.
 
Crit mechanics are going to be changed to solve the 'Crits past DC 150 are near impossible' and 'Bare failures are better than normal ones' issues
Isn't that how it's supposed to work? That getting a bare failure is preferable to getting a usual failure?

Anyways, thank you for the information on the upcoming reworks. Those sound helpful for getting a handle on the many things Doofania's involved in. I would like to ask a little more about the "industries" mechanic. DEI is involved in a lot of sectors, but for most of them we only do one or two actions. What will the broad strokes of this mechanic be, and how will we interact with industries?
 
Updated Chat With Bossman so it can provide more benefits
Heh. I was literally about to suggest changing Chat with Bossman to be more narrative- and personality-based rather than something influenced by dice rolls, so we would be pushed to do chats with more people without the results being luck-based.

It seems you guys already have some ideas for it. Can't wait to see what you come up with.
 
I feel like Janus knows more about the subject than you. Given that he's confident enough to propose something like that.
In this case, the actual problem isn't "AI in general." It's Wower's recent discovery that anthropomorphizing a system tends to cause it to attain sapience.

This makes it quite challenging for us to straddle the line between "smart enough to be useful" and "smart enough to develop a personality." Because there is no line, it's blurry.

A Palm Pilot is unlikely to attain sapience because it just doesn't have the necessary hardware performance. And a super-duper-ultra computer no one anthropomorphizes may not get there.

But in the intermediate range where we have, for example, Norm? In there, there's no clear bright line we can design into our systems to prevent them from attaining sapience if someone anthropomorphizes and interacts with them enough.

As to why Janus and Ludivine haven't said so... Well, bluntly, Ludivine knows a lot but this is a recent discovery about AI theory that even she didn't know until a few months ago, and she probably still thinks about AI mechanistically, plus she's got "not invented here" syndrome and is probably a bit too arrogant to fully process the significance of Wower's breakthrough just yet. And Janus is even more arrogant and even more likely to think of intelligent beings (including AIs) as clay for him to reshape at will, along with being somewhat less naturally gifted than Ludivine.

They could just be wrong, y'know.

Crit mechanics are going to be changed to solve the 'Crits past DC 150 are near impossible' and 'Bare failures are better than normal ones' issues
@Made in Heaven ... Uh... why is "bare failures are better than normal ones" a problem? Wouldn't it seem reasonable and logical for a failure that rolls close to the DC to be better than one that rolls far below the DC?

Why is this a problem you need to fix? Am I missing something here?
 