Do you think the cover is cool enough that I should add it to future other animal posts?
Or should we just consider that this omake was the cover and the first page, that the following omakes will be the next pages, and therefore no more covers?
The cover is way cool IMO, and deserves to be shown again. 💚
 
So, I ran into this quest yesterday (I think) and might have binge-read all the thread-marked content so far (which is all great) ... but clearly there's at least an order of magnitude more discussion so I'm hardly caught up 😅

Since I've never quested before, I have a bunch of really basic questions like
  • may I join in on the fun here?
  • is there some "how to quest" you'd recommend I read? (if only because it's really reassuring to have (some of) the expectations & "social contract" of the space written down, rather than going in blind)
  • are all the "canon" write-ins threadmarked?
  • is there someplace I can go read about the thread's plans on various topics, short of trying to read through the whole thread?
 
So, I ran into this quest yesterday (I think) and might have binge-read all the thread-marked content so far (which is all great) ... but clearly there's at least an order of magnitude more discussion so I'm hardly caught up 😅

Since I've never quested before, I have a bunch of really basic questions like
  • may I join in on the fun here?
  • is there some "how to quest" you'd recommend I read? (if only because it's really reassuring to have (some of) the expectations & "social contract" of the space written down, rather than going in blind)
  • are all the "canon" write-ins threadmarked?
  • is there someplace I can go read about the thread's plans on various topics, short of trying to read through the whole thread?
1. All who do not come with hostile intentions are welcome to join this glorious journey on Guangchou's path to achieve Fully Automated Luxurious Gay Space Communism.
2. Here. But in short: Read Update ---> Ask Questions/For Clarification ---> Formulate Responses/Actions/Plans ---> Discuss With Thread ---> Vote Upon Said Things ---> Read New Update And Repeat.
And the bog-standard be civil/do not insult/stay hella communist gay that applies to online discussions.
3. All in the Threadmark section are canon, Sidestories are canon or semi-canon unless specified as non-canon or blatantly out of universe or set in the future.
4. The Discord could offer you some insight, I posted the political map for the next turn there, and people (read: Planning Committee (read: Cyberfemme)) should be willing to give you a rundown on what is planned.
 
Yeah, it's easiest if you just ask questions here about whatever you're curious about.

More discussion in the thread is nice for putting it on the front page more often where more people can find it.
 
1. All who do not come with hostile intentions are welcome to join this glorious journey on Guangchou's path to achieve Fully Automated Luxurious Gay Space Communism.
Is there a path we'd rather tread anyway? :wink2:

4. The Discord could offer you some insight, I posted the political map for the next turn there, and people (read: Planning Committee (read: Cyberfemme)) should be willing to give you a rundown on what is planned.
Ohno. I guess I'll need to recover my account (or make a new one) and somehow figure out how to use that platform without frying the AD(H)D-brain 😅
 
So, I ran into this quest yesterday (I think) and might have binge-read all the thread-marked content so far (which is all great) ... but clearly there's at least an order of magnitude more discussion so I'm hardly caught up 😅

Since I've never quested before, I have a bunch of really basic questions like
  • may I join in on the fun here?
  • is there some "how to quest" you'd recommend I read? (if only because it's really reassuring to have (some of) the expectations & "social contract" of the space written down, rather than going in blind)
  • are all the "canon" write-ins threadmarked?
  • is there someplace I can go read about the thread's plans on various topics, short of trying to read through the whole thread?
Welcome to the most glorious thread of the wonderful quest for the rise of a socialist nation through mad engineering, mass queerness, and the incomprehension that something could be impossible (WMGTWQRSNTMEMQISI for short). I wish you fun.

Don't hesitate to ask if you have problems with how to make a turn plan; it was a little tricky for me, and I'm still not certain that 16 is the exact maximum of actions per plan.

QM: Should I change anything in the progress report I wrote? I wasn't at my best when I did it, so it wouldn't be a surprise.
 
So, in no particular order:
  1. I've been wondering ever since Guangchou got geothermal power: are there already heat distribution networks in place, both for (relatively) low-heat industrial processes and for heating the arcologies People's Most Pleasing Large-Scale Energy Efficient Housing?
    Are things set up so it's easy to contribute process heat into the system? (At the logical extreme, local cooling loops can collect waste heat from various systems to use more productively, while also preventing it from making everything around them hotter; likely a consideration given the Guang climate)
    This will probably become even more relevant with nuclear teakettles power plants, since they run into the same thermodynamic limits on efficiency as all other heat engines while putting out orders of magnitude more power.

  2. The political map made me realise that neither China nor the USSR is actually in CyPac... which I guess makes sense, for it to be viewed as (mostly?) orthogonal to the "Western"/"Eastern" blocs. What's their current relationship to CyPac, or to the Guang computing industry?
    If I'm not mistaken, @yeastmobile's "It's pronounced l-tog-mers" is non-canon (as Jungmin didn't personally go to the Lupus conference?) so the USSR has yet to request Guang expertise in cyberization & computer-assisted planning.

  3. @cryptoam's SCL protocol seems to cover confidentiality & integrity of the communication, but so far I've not seen anything about authentication (beyond "the other side's pubkey is known") and authorization; would I be welcome to write something?
    I'm having ideas, involving object-capabilities (though I'll need to think up an appropriately-Guang name for them) and some anarchist friends who somehow survived dad's purges and ended up in Guang computing research.
I'm also having a ton of tech musings but I guess that would be better as a separate post 😅
 
So, in no particular order:
1. No, HDNs exist only in some villages.
2. The USSR is kinda annoyed/smug at you/the West, since CyPac is a massive win even as merely a propaganda one. China is thankful for aid, but is currently busy with some minor shit. Their relationship with both is: "Want some, but do not get much, as most is used internally."
3. If you wish; I am not a cop. I can't stop you from writing! :V
 
1. So they do exist but are uncommon(?), and we totally could systematize their use? Exciting~

2. What do you mean by "do not get much"? As in, they are unaware of how much of a force multiplier it is, or they haven't deployed it much because internal troubles are taking priority?

3. Oh, I was only worried about stepping on @cryptoam's toes / conflicting with stuff they plan to write ^^'
I guess it would have been clearer to ask whether we need to coordinate :3
 
1984 - Radar, Sonar, Energy, and Fuel Related Effortpost
Guang-wide radar integration
Thanks to @CyberFemme's fever dreams the Airy's and Navy's modernization, we have three core radar concepts:
Now, what if we made TMGDPRG, LoAN, DoH, PoE into a giant AWAC platform high-power ground-based radar transmitters? There are a fair few ways we could leverage it:
  • immediate & boring:
    • use as the illumination source for Anata fighters and possibly (some) ship-based radars
      Pros:
      • a lot less vulnerable than Tao planes,
      • don't need to maintain an AWACS patrol over/around Guangchou, replacing fuel use (that we have ~always been short on) with geothermal or nuclear power
    • also build ground-based receivers (still in bistatic configuration)
      Pros:
      • a lot easier to hide than planes, can rely on ground-based telco infrastructure to fronthaul the data back to C&C
      • annoy Taiwan to no end once they realise Guangchou is effectively their shadow Air Traffic Control
      • segues nicely into the next thing
    • Cons: seems costly for what we get out of it, if we stop developing it at that point
  • our madengineers will definitely think of it
    • develop multistatic radar, ground-based receiver network gets massive improvements in radar resolution & sensitivity
    • modulate (encrypted) data into the radar broadcasts, e.g. position of ships/planes/etc. in the area, area-wide announcements or orders, etc.
      fill in the blanks with random data, to prevent traffic analysis (and possibly prevent foreign analysts from deducing this is modulated data rather than random modulation, as an anti-spoofing measure or such)
    • Pro: shouldn't require hardware changes, "just" more maths and new/improved software
  • ohohoh steeples fingers all according to keikaku The 5-Year Plan
    • use the broadcasts for positioning and time & frequency transfer: similar uses & benefits as GPS/GLONASS (which won't be operational until 1993 OTL) but with ground-based transmitters and coverage is limited to the Guang radar's range
      fair warning: time metrology (incl. time & frequency transfer) is a special interest of mine
      • would likely need precise (and precisely-synchronised) time sources at the transmit stations
        ... but we may as well provide precise time through the whole Guang telco network
        • we have vertically-integrated telco and electronics, we can solve that very well and cheaply... especially if we deploy it as part of an otherwise-needed telco upgrade
        • for the reference sources, atomic clocks have been a thing since the 50s, we could import the first ones and develop native ones later
        • there's basically no end to the scientific and industrial applications that require precise timing, we may as well solve the problem for everything that's connected to Guanet (not to be confused with guano)
      • civilian applications shouldn't be an issue, as the receivers don't need sensitive information, just the broadcast frequency/bandwidth and location of each station... which foreign ELINT would trivially acquire anyhow
        • navigation for ships, trucks, etc.
        • TRAAAAAAAAAAAAIIIINNS! 😻
          Dragon Rail can get computerized train control with train-side positioning equipment:
          • retrofuturistic railnet operation center with live overview of the whole network (incl. train positions etc.)
          • train integrity monitoring: detect (partial) derailments, detached wagons, etc.
          • all the goodies from modern train protection systems: support for high-speed trains, in-cab signalling, higher throughput & network utilisation thanks to moving blocks, etc.
          • 2 receivers per train (which are solid-state electronics not exposed to the weather) are probably cheaper and less fussy to have than track-side equipment over the whole network
    • that unlocks multistatic radar for mobile platforms... assuming we have high-throughput digital radio
      now, every single Guang ship and radar-equipped aircraft (potentially including in-flight missiles) can be part of The People's Radar Network
      • communication and AESA radar can be integrated together, gaining the ability to transmit in arbitrary directions
        Pro: will annoy US SIGINT agencies to no end, as it basically removes their ability to use radiolocation, traffic analysis, etc. to know what our military is doing
  • Overall pros:
    • keep all Guang safe(r) from airborne threats like nasty Japanese bombers
    • advance towards the vision of integrated, communicating military systems that started with the ITs
    • begins quick/easy to develop,
    • ends with an integrated platform doing navigation, communication, and EWar/ECM/ECCM
      • faster development of new platforms: only varying "how much computer and antenna" based on the role the plane/ship/etc. is being designed for
      • keep old platforms current: they can get new firmware and/or new electronics as the common platform gets more advanced, rather than needing to invent three new systems from scratch to refurbish a given vehicle model
    • massive utility in future developments, be it for science, industry, logistics, etc.
  • Cons
    • the ground-based transmitters will be complete power hogs, although Guangchou is about to get
      • nuclear power (if we can secure a stable supply of reactor-grade material)
      • (OwO) very high temperature superconductors (see "(Now Is The) Time of Monsters" Nuclear Attack Submarine Type 9)
        • but refurbishing giant transmission systems with superconducting antennas cooled to -30°C sounds like a pain
          we may want to go with superconductors from the start, meaning we'll depend on successfully starting a factory
    • this is flexing hard on having completely-WTF electronics for the time (and nearly-room-temp. superconductors!)
      not necessarily a bad thing, but I'd expect intelligence ops targeting us to get both more numerous and more sophisticated... though that seems unavoidable to start with, given the large-scale cyberization etc.

The radar network's name should obviously be wordy and over-the-horizontop, but I think a nice thematic nickname would be something like paddy hat (but in Guang); plus, the Austrian air defence radar is about to become operational (1983 OTL) and it's called Goldhaube: "golden hat."
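The positioning use sketched above (GPS-like, but with synchronized ground transmitters) boils down to multilateration. Here's a minimal sketch, assuming perfectly synchronized transmitter clocks, a known receiver clock, a flat 2D geometry, and made-up station coordinates; subtracting the range equations pairwise turns the problem into a small linear system:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def trilaterate(stations, toas):
    """Recover (x, y) from one-way times of arrival from three
    synchronized transmitters at known positions."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    r1, r2, r3 = (C * t for t in toas)  # convert flight times to ranges
    # r_i^2 = (x - x_i)^2 + (y - y_i)^2 ; subtracting pairs cancels the
    # quadratic terms and leaves a linear 2x2 system in (x, y):
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = r1**2 - r2**2 + x2**2 + y2**2 - x1**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = r1**2 - r3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical station layout and receiver position, in metres
stations = [(0.0, 0.0), (100e3, 0.0), (0.0, 100e3)]
truth = (30e3, 40e3)
toas = [math.dist(truth, s) / C for s in stations]
pos = trilaterate(stations, toas)
```

A real system would also have to solve for the receiver's clock offset (a fourth unknown, hence GPS needing four satellites), which is why the precise time distribution mentioned above matters so much.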


Fixed-installation sonar
Now, like all good Guangs we like being safe from air raids etc., but we should also think of the poor sods stuck in tin cans attack submarines for months at a time... and the paddy hat might inspire some enterprising engineers to design an array of passive sonars and use similar techniques to mush all that data together
  • active sonar would illuminate our subs
  • what sensors to use?
    • buoys might work, and can backhaul data over radio, but aren't very discreet
    • underwater construction is expensive, and we'd need to lay submarine cables for data and power
    • ... but we can get away with only cable
      • optical fibers respond to mechanical stress in ways that are "remotely" detectable (from the end of the fiber, rather than the stress point)
        • Guang scientists may already have noticed, if we are using (or looking into) optical communication and researched how optical fiber might react during earthquakes etc.
        • we could well be using optical fibers: the first endoscope and first fiber bundle to transport whole images across corners were invented by Heinrich Lamm, a German Jew who fled to the US OTL
          unrelatedly, the first data transmission system over fiber was done in the 60s
          plus, optical coms will keep working in the event of a nuclear attack (though TBH, that's already a Bad End)
        • those responses are known OTL to be usable as acoustic sensors (incl. for hunting subs, IIRC)
      • that's equivalent to a microphone at every point along the fiber
        • spatial resolution (the spacing between notional microphones) is determined by the receiver's time resolution / sampling frequency
        • time resolution (how often do we sample signal from that set of notional microphones) depends on the tech used
          • if we send laser pulses and wait for the reflected light, that's the pulse frequency, which is limited by the fiber's length: we need to wait L/c for the pulse to clear the fiber, so that's at most 3km-long fibers for sampling at 100kHz... or 30km at 10kHz
            I seem to recall sonar mostly cares about low-frequency sound, so it wouldn't be a terrible limitation, but I'm no expert at all on the topic
          • if we send a chirp (sweeping in frequency/wavelength) that's a non-issue... but I have no clue how early-80s Guangchou would make frequency-modulated lasers 🙀
  • where to deploy?
    • the straits of Guangchou and Osumi
      • now I'm imagining submersible ITs being deployed to covertly lay cable in Taiwan's territorial waters, while the ship remains outside
    • radiating from the Guang coastline
    • in the future (and once China recovers) maybe run fibers through the East China sea?
      plus, that could bring Guanet to China
  • Pros
    • if we build a ship (and expertise!) for laying cables, we'll be ready to run fiber lines to China, the USSR, our CyPac allies in the vicinity... or even connect the whole world, eventually
      • giving us opportunities to roll out fiber to countries further away, all over their coast, etc.
        • whichever fibers are currently unused will nicely do for passive sonar... or maybe even in-use fibers, as long as the sonar system uses other wavelengths than for data
        • the sonar system can be disguised / explained away as a continuous diagnostics system for the fiber
          it's even true: a break in the fiber should cause the laser pulse (or chirp) to be reflected in near-totality, and the location of the break can be determined from the time delay (between pulse/chirp and receiving the reflection)
      • pre-empting some notable Western efforts at making most global Guanet traffic flow over their territory (or through fiber infrastructure they own) and snooping on it
    • continuously collect intel on sub and ship movements, both military and civilian
    • some of our Communist Comrades will be keenly interested (at least if they get their shit together)
  • Cons
    • unlike airborne threats, sub warfare isn't a deeply-rooted concern in Guang culture, it will be harder to justify the effort
      ... though this proposal is nowhere near as ambitious as paddie hat
    • people will be more suspicious of Guang laying fiber around the world, if it ever becomes known they can be used for sonar
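The pulse-rate limit and the break-location trick above are both simple time-of-flight arithmetic. A minimal sketch, using the post's own L/c estimate (vacuum speed of light; the group velocity in real glass fiber is closer to 2×10⁸ m/s, which shortens the usable lengths accordingly):

```python
V = 3e8  # m/s; the post's L/c estimate uses vacuum c for simplicity

def max_fiber_length(sample_rate_hz, v=V):
    """Longest fiber that still lets us send a new pulse every sample
    period: the previous pulse must clear the fiber first (t = L / v)."""
    return v / sample_rate_hz

def break_location(reflection_delay_s, v=V):
    """A break reflects the pulse back to the receiver; the light covers
    the distance to the break twice, hence the factor of 2."""
    return v * reflection_delay_s / 2
```

This reproduces the figures in the post: 3 km of fiber at 100 kHz sampling, 30 km at 10 kHz, and a reflection arriving 20 µs after the pulse puts the break 3 km down the line.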

Reducing fuel use, switching to alternatives
A bit more thinking-aloud than a concrete proposal, but those are problems that will require a wide array of solutions.
  • Pros
    • Guangchou's fuel supply has been inadequate since the beginning
      or at least since the statistics bureau has been giving us bad news
    • improved public health and quality-of-life, especially in areas that are heavily industrialised or densely populated
    • positive ecological impact, possibly a corresponding Reputation bonus (since negative eco. impact had a malus)
    • reduced dependency (and expenses) on foreign hydrocarbons
  • Cons
    • have a decent reason to replace most of the skateboard frames with a new model
      ... but at least we might be able to keep whatever cabin & equipment was on top
    • need to design new engines & fuel cells
    • stop putting the IT's magic fuel in everything
Getting into it sector by sector:
  • Heavy-industrial heating
    • needs high temperatures
      > 300 °C, often > 800 °C; 1,100 °C and 1,400 °C for steel blast furnaces and cement kilns respectively
    • Rejected options
      • resistive electrical heating: terribly inefficient
        thermal power plants have max. ~50% efficiency, Guangchou doesn't have other energy sources (yet?)
      • keep burning fossil fuel, do post-combustion carbon capture: are we going to reinvent CO₂ credits next? 🤢
        (also, doesn't solve the dependency on fuel imports)
    • 👍 heat pumps to move process heat from output to input
      basically, cooling product coming out of the furnace/kiln/whatever to heat up what's going in
      super effective since the heat source is very hot
    • radiative electrical heating (induction or microwave)
      very effective at selectively heating things, but usefulness is very contextual
    • SPICY ROCKS NUCLEAR POWER
      • does indeed provide a lot of (thermal) power very efficiently
      • can be colocated with electrical power production, so any excess goes into the national grid
      • security & safety concerns, need a lot of supporting infrastructure to mine, extract, and enrich fissile materials
        (unless we are 100% reliant on importing those from China or the USSR)
    • 👍 geothermal
      that's the "Cauldrons" megaproject
    • burning non-fossil fuels
      • doesn't require many changes, as long as the fuel can provide the necessary temp.
      • no biomass / biofuel
        • would use massive amounts of farmland, which is scarce in Guangchou and needed for feeding people
        • would still be a public health and QoL issue
  • Residential and light-industrial heating/cooling: replace with local heat-distribution networks and co-generation
    • We apparently have the technology, but it's only deployed in "some villages"
    • Make sure all standard-design buildings (housing, industrial, and otherwise) have affordances for HDNs
      • easy if there already is central heating/cooling
      • should Guang appliances (ovens, fridges, etc.) now be designed to dump the heat they produce into the cold side of an HDN?
        I haven't done the thermodynamics on that, but it seems like a net win for both energy efficiency and quality-of-life (doesn't heat up the place in summer, no fan noise, etc.)
  • Power generation
    • IIRC we haven't built hydrocarbon-fueled power plants since we got geothermal power?
    • Ensure sufficient renewable & nuclear production capacity, so fuel is only used for peaker plants
    • add support for centrally-managed load management to everything where that makes sense
      I guess we are getting GoT (Guanet of Things) 😹
      tipping my hat to @CyberFemme for building that into the standard housing's water tanks from the start
    • Build up energy storage capacity to further reduce fuel consumption
      • Is the volcano tall-enough for pumped hydroelectric storage to make sense?
      • Chemical energy storage
        • Batteries suck (see below)
        • (reversible) fuel cells?
    • make everything more energy efficient
      more easily said than done
  • Transportation
    • are trains and trolleys already electrified?
    • EVs for road use?
      • main roads can be equipped with catenaries to provide motive power (and even recharge BEVs)
        Germany is trying out the concept OTL
      • batteries are pretty awful though
        • Lithium extraction is highly polluting and linked to inhumane treatment of native populations (even more so than most metals)
        • does Guangchou even have native deposits?
        • inevitable degradation over time
      • fuel-cell EVs, then?
        • IDK what the state of fuel-cell tech would be in 80s Guangchou
        • need a non-fossil fuel
        • otherwise excellent
          • refuels as fast as a conventional vehicle (unlike the slow charging of a BEV)
          • unlike BEVs, higher-range FCEVs aren't a lot heavier / less efficient
            (assuming the fuel has high specific energy)
          • a lot more energy-efficient than internal combustion engines
I guess I foreshadowed it more than a bit, mentioning non-fossil fuels and how fuel cells are the best thing since sliced bre... pork buns, but I think we should look into native, renewable (di)hydrogen production on Guangchou because:
... fuck, my laptop experienced an(other) unscheduled shutdown and I lost all I wrote on that topic, so I'll write it again... later >_>'
  • it's a great fuel
    • much higher specific energy (energy per unit mass) than all other chemical fuels
      2.6 × that of natural gas (which is itself the highest amongst hydrocarbons)
    • energy density (per unit volume) is subpar... but that's only a concern in vehicular and aerospace applications
      for surface vehicles, studies found an H₂-FCEV is vastly better than a BEV in spite of that because
      • making a bigger fuel tank doesn't make the vehicle a lot heavier (because the tank and its contents are typically a small fraction of the vehicle's weight)
        that's not so true of batteries, since the whole energy storage system gets scaled up
      • vehicle mass apparently matters more than volume, when it comes to friction etc.
      • that may also apply to ships (at least those too smol to go nuclear)
    • very efficiently usable for both heat (in a condensing furnace) or electrical power (in a fuel cell)
      H₂ fuel cells (+ electrical motors) are already (OTL) more efficient than an internal combustion engine can ever be (at least without getting handwavium to make ICEs out of cheap very-high-temp. alloys ... and fuels that can create those temperatures, etc.)
    • no harmful emissions from fuel cells
      (technically, water is a greenhouse gas, but it goes back into the usual water cycle... at least when released on Earth's surface; dumping megafucktons of water in the high atmosphere might be more problematic 😅)
      • in the case of combustion, care is needed to avoid producing NOₓ, but that's true of just-about anything else?
  • we have a couple options for producing it natively
    • maybe Guangchou (or the contested islands OwO) has geologic hydrogen
      • unclear whether this is renewable, even if chemical processes produce it deep within the Earth's crust
        ... but it makes sense regardless, in terms of harm reduction and scale of the reserves
      • would provide a very easy leg-up towards near-zero-emissions grid power and transportation
    • co-generation of hydrogen in a nuclear plant
      • between half and all of the necessary energy can be thermal, bypassing the ~50% efficiency tax in converting the reactor's thermal power into electrical
        basically the "spicy rocks" solution for heavy-industrial process heat that I mentioned above, applied to producing H₂
      • add tanks and fuel cells, and we have highly-efficient energy storage for the electrical grid
        one more nail in the coffin of our hydrocarbon-fueled power plants
      • electrolysis will also produce large amounts of O₂, which is always a good thing to have
        • for life-support systems and medical use
        • in industrial processes
          • removing sulfur impurities and excess carbon when smelting steel
          • synthesis of ethylene glycol
          • water treatment, apparently?
          • cutting and welding metals
        • Wei apparently picked hydrolox for Guangchou's future space program
        • I guess if we still have a significant surplus (likely) it would be a nice export?
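The "2.6 ×" specific-energy claim earlier can be sanity-checked against textbook higher heating values (approximate figures, quoted from memory, so treat them as ballpark):

```python
# Higher heating values, MJ/kg (approximate textbook figures)
HHV = {
    "hydrogen": 141.8,
    "methane": 55.5,   # stand-in for natural gas
    "gasoline": 47.3,
    "diesel": 44.8,
}

# Hydrogen vs. the best hydrocarbon: roughly the post's 2.6x figure
ratio = HHV["hydrogen"] / HHV["methane"]
```

The catch, as noted above, is volumetric density: even liquefied, H₂ stores only about a quarter of the energy per litre of kerosene, which is why all the aircraft projects listed below went cryogenic.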
Somewhat unrelatedly, I'd love to hear people's thoughts about H₂-powered jets and such... even though "weird fuel" would make logistics suck for deployment away from Guangchou, especially since all OTL attempts used liquid H₂ to get higher energy density (per volume):
  • Lockheed Skunkworks' Project Suntan (what if spy plane was space plane?)
  • Tupolev Tu-155
  • Reaction Engines' current(?) SABRE hypersonic air-breathing engine concept, meant for
    • LAPCAT A2 (meow!) concept for a hypersonic airliner (WTF?)
    • Skylon SSTO spaceplane

Sorry that turned into a gigantic braindump, but what do you think (about any of it) ?
 
@cryptoam's SCL protocol seems to cover confidentiality & integrity of the communication, but so far I've not seen anything about authentication (beyond "the other side's pubkey is known") and authorization; would I be welcome to write something?
I'm having ideas, involving object-capabilities (though I'll need to think up an appropriately-Guang name for them) and some anarchist friends who somehow survived dad's purges and ended up in Guang computing research.
Hi, it's great to see more people wanting to contribute. SCL is intended to be similar to TLS, but designed in a manner that prevents the many problems that SSL and TLS had in their earlier protocol versions. SCL is simply an overlay protocol, and the specifics of the actual authentication are up to the application using it. For example, the application may make use of PKI infrastructure to bind cryptographic public keys to the expected hosts. It can also include client + server public-key-based certification/authentication, and even mix in a secret key in situations where that can be set up. Of course, just because the tunnel is authenticated (i.e. you are talking to the right server), that doesn't provide any specific guarantees about the peers. You are also welcome to add to user/peer authentication technologies as well.

I'm also working on a system similar to Kerberos for shared secret keys, given a trusted backbone. The principal idea is that the two peers each have their own access to the trusted backbone with their own secret key, and have a known identifier. When a secret key is needed, the initiating party sends a request for a secret key for the identifier of the other party. They get back a package containing a key id and the key. The key id is then shared with the receiving peer, which can then use the key id to get the shared secret key from the backbone.

This scheme simplifies key distribution to only needing to secure the backbone and to give each node a secret key for communicating with the backbone. Then all that is needed is the other node's id, and we can handle key agreement without needing N*(N-1) keys for N nodes (only N keys instead).
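As a toy model of that backbone scheme (all names hypothetical, requests unauthenticated, keys held in memory; a real design would wrap every exchange in the nodes' long-term keys):

```python
import os

class Backbone:
    """Toy trusted key service: each node shares one long-term key with
    the backbone, and pairwise session keys are minted on demand, so N
    nodes need N long-term keys instead of one key per pair."""

    def __init__(self):
        self.node_keys = {}     # node id -> long-term key
        self.session_keys = {}  # key id -> minted session key

    def enroll(self, node_id):
        self.node_keys[node_id] = os.urandom(32)
        return self.node_keys[node_id]

    def request_key(self, initiator, peer):
        # A real design would authenticate this request under the
        # initiator's long-term key; omitted here for brevity.
        if peer not in self.node_keys:
            raise KeyError(f"unknown peer: {peer}")
        key_id = os.urandom(8).hex()
        self.session_keys[key_id] = os.urandom(32)
        return key_id, self.session_keys[key_id]

    def fetch_key(self, peer, key_id):
        # Likewise, the peer's fetch would be authenticated and the
        # response encrypted under the peer's long-term key.
        return self.session_keys[key_id]

backbone = Backbone()
for node in ("alice", "bob"):
    backbone.enroll(node)

key_id, k_alice = backbone.request_key("alice", "bob")
k_bob = backbone.fetch_key("bob", key_id)  # bob redeems the key id
```

The key id doubles as the only thing the initiator has to send to the peer over an untrusted channel, which is the nice property of the scheme.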
 
Lol. Cybersecurity and cryptography are my passions and yeah, it can get complicated with the jargon.
Also am working on an omake and am going to post it relatively soon. Stay tuned folks.
 
Hey, look. SSL/TLS had no problems, and totally wasn't obsoleted early in the 90s.

If you'll excuse me, there's my bus, I must catch it.

Or, let's see how much of my networking classes I remember.

TLS (or Transport Layer Security) is a cryptographic protocol. It comes from SSL. I have to mention that SSL 1.0 was unpublished and replaced with 2.0 in 1995. Then 3.0 in '96... then TLS in 1999... and so on.

The idea is that it creates a stateful connection (i.e. it remembers the connection) that handshakes between two nodes (usually, server and client). The client presents what it can do, the server goes "OK, I pick this one. This is my ID."

The ID is issued by a trusted certificate authority*; the client goes "Ah, yes. I approve of this issuer**." It generates a random number and encrypts it with the public key, a key the server uses to ID itself. (It's part of a pair; for obvious reasons, the private key, which can be used for... a lot of things... is private.)

The server uses its private key to decrypt the number, then uses it as a session key.

* The trusted certificate authority can be subverted; see: China, Russia and Comodo, and sometimes can be prone to social engineering attacks. (see: Eddy Nigg gaining control of mozilla.com's cert, etc.)
** Obviously, this means if someone has placed a fake CA on your client, then they can feed you a fake certificate.
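The client-encrypts-a-random-number step above is the classic RSA key exchange. Here's a toy sketch with deliberately tiny textbook-RSA primes, purely to show the data flow; it is NOT secure (real deployments use ~2048-bit keys, padding, and nowadays prefer ephemeral Diffie-Hellman entirely):

```python
import hashlib
import random

# Toy server keypair (textbook RSA, tiny primes for illustration only)
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # server's private exponent

# Client side: pick a random "premaster secret" and encrypt it with the
# server's public key (e, n), as presented in the server's certificate.
premaster = random.randrange(2, n)
ciphertext = pow(premaster, e, n)

# Server side: recover the premaster with the private key; both sides
# then derive the same session key from it.
recovered = pow(ciphertext, d, n)
session_key = hashlib.sha256(str(recovered).encode()).digest()
```

Note this toy also illustrates the footnoted weakness: anyone who can swap in their own public key (a fake CA) receives the premaster instead of the real server.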
 
I have to mention that SSL 1.0 was unpublished
This was partly because there were so many security issues with the first specification that they effectively shot it out the back and had to work on a new version. Secure protocol design was hard, and selecting good cryptographic primitives was hard as well. Remember, this was the time when stuff like MD4 and MD5 were considered secure. We also had FEAL come out (which was successfully attacked at the same time it was presented), which is nowadays used as an example cipher that can be easily attacked using modern techniques like linear and differential cryptanalysis.

I'm actually having some problems in various areas of the design, because some of the knowledge on securing protocols only became available after the quest's time period.

Also: Incoming omake in the next 5 mins everyone. Gotta finish the editing and look up some stuff in my previous omakes.
 
My classes actually don't focus a lot on the implementation mechanics of SSL/TLS or .. even IPSec/IKE which..

I get it. But having done various CTF stuff I wish we'd have done more of it.

But I'm going to a community college so I have to expect a bias towards doing things. :D
 
A proposal for better password hashing:
Abstract:
This proposal explores enhanced password hash designs that seek to reduce the asymmetry between an attacker's effective computational power and that of a defending server, which can only tolerate a limited computational slowdown when computing password hashes. We explore the use of memory in the computation of password hashes and how including memory can help limit this asymmetry. The asymmetry can be examined in terms of both the total amount of memory needed to compute a single hash and the speed of memory access.

Introduction:
Passwords are a common security mechanism. They are used both to verify identity (since passwords are a static shared secret) and as shared secrets in themselves (i.e. to derive encryption keys). Because of this, attackers are highly motivated to try to recover passwords during their attacks. Successful recovery of a valid password can allow for the compromise of resources such as accounts and sensitive data.

There are two main scenarios for attacking passwords. Either the attacker has to send queries to a system that is validating the password (typically referred to as an online attack), or the attacker has access to the information used to verify the password (typically referred to as an offline attack). Online attacks can be effectively detected and mitigated by the system. For example, the system can monitor each client's attempts at logging on. Should a client attempt to submit too many passwords at once, to either a single account or multiple accounts, it is likely that the client is attempting to discover passwords, and action can be taken against that client. Offline attacks, however, do not run into this problem. The attacker is limited only by the difficulty of recovering a valid password from the authentication data.
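The online-attack mitigation described above can be sketched as a simple per-client attempt counter (a toy illustration; the threshold and lockout policy are made up for the example):

```python
# Toy rate limiter for online password-guessing: lock out a client after
# too many failed attempts. Policy and threshold are illustrative only.
from collections import defaultdict

MAX_ATTEMPTS = 5
attempts = defaultdict(int)   # client id -> consecutive failed attempts

def try_login(client: str, password_ok: bool) -> str:
    if attempts[client] >= MAX_ATTEMPTS:
        return "locked out"           # refuse even correct passwords
    if password_ok:
        attempts[client] = 0          # reset counter on success
        return "success"
    attempts[client] += 1
    return "denied"

for _ in range(MAX_ATTEMPTS):
    try_login("attacker", False)      # burn through the allowance
assert try_login("attacker", False) == "locked out"
assert try_login("honest", True) == "success"
```

A real system would add per-IP tracking, exponential backoff, or CAPTCHAs, but the point stands: the online path can be throttled, while an offline attacker faces no such gate.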

There are multiple methods of attack available depending on how the defending system handles passwords. In the simplest case, the password is left in clear text. In such a scenario, the password is easily recovered and only has to be extracted from the data. A defender may seek to mitigate this by encrypting the passwords. This is a promising start at defense; however, there are problems. The authenticating party must somehow have the ability to decrypt or otherwise compare submitted passwords to the encrypted data. This means that part of the authentication data must include the encryption key. In an offline attack scenario, we must assume that the attacker has access to such information. Therefore encryption proves to be a trivial barrier to circumvent in an offline attack (it is still a useful defense measure for preventing easy access to authentication data).

The defender may consider using hashing instead. When a password is registered, it is stored as the hashed value of the password itself. When authentication occurs, the submitted password is hashed and compared to the stored hash. If both match, it is likely that the submitted password is the original password, and therefore the submitted password is valid.
This scheme, however, depends on the specifics of the implementation. Failure to implement it correctly allows the attacker to easily breach the security of the system. For example, if the password is hashed using a hash that is not cryptographically secure, it is possible to construct other passwords that collide with the hash. These passwords would be just as valid as the original password and equally usable.

A cryptographically secure hash by itself is not sufficient either. Due to the nature of the hash, it will always output the same hash for the same password. This means that if multiple passwords are the same (for example, users selecting common passwords), they will all share the same hash value. Detection of such common passwords renders all of them vulnerable to simultaneous attack. First, it can be conjectured that the password is a common one, so one can try the most likely candidates. Once a matching candidate is found, all the passwords with the same hash value are breached at once.

There is also the problem of precomputation attacks. The attacker may prepare a lookup table of the most likely passwords before attempting recovery. This reduces the cost of the attack to simply looking up the hash in the precomputed table, which is extremely cheap compared to having to perform guesses. The memory needed for this attack can also be reduced by using advanced methods such as rainbow tables, allowing highly extensive precomputation attacks to be mounted cheaply and repeatedly.

The countermeasure here is to include a random static salt value in the evaluation of each password hash. This random salt value makes it so that each password corresponds to multiple possible hashes. Since each password stored in the authentication data has a different salt, even if two users have the same password, their hashes will be different, preventing the simultaneous breach of many passwords. If the salt is sufficiently large, the amount of memory required for precomputation attacks can be well in excess of what the attacker can handle.

However, salts do not impose a substantial computational burden on brute forcing a single password. In response to this, iterated hashing is used to enforce a minimum computational burden for each guess. The defender can afford this cost because they do not attempt to brute force passwords and only have to handle a limited number of authentication queries. The attacker, however, must continuously expend computational resources on each guessing attempt. This increases the computational cost of brute forcing. When the computational cost is appropriately tuned, the attacker will suffer sufficiently high costs that their ability to recover passwords from the authentication data is minimized.
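The salt-plus-iteration scheme described in the last two paragraphs can be sketched as follows (function names and parameters are illustrative, not a production design):

```python
# Minimal sketch of salted, iterated password hashing: a per-user random
# salt defeats precomputation, and the iteration loop imposes a minimum
# cost per guess. Iteration count here is illustrative.
import hashlib
import hmac
import os

def hash_password(password: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    digest = salt + password
    for _ in range(iterations):           # computational burden per guess
        digest = hashlib.sha256(digest).digest()
    return digest

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison to avoid leaking a timing side channel.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)                     # fresh random salt per user
stored = hash_password(b"hunter2", salt)  # this pair is what gets stored
assert verify(b"hunter2", salt, stored)
assert not verify(b"wrong", salt, stored)
```

The defender pays the iteration cost once per login; the attacker pays it once per guess, for every guess.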

There is one wrinkle, however. Brute forcing and guessing is an embarrassingly parallel problem. Each attempt is wholly independent of another. The only costs of parallelization are the initial setup of each parallel computation (i.e. specifying a range of passwords to try for each computation) and collecting the results. This means that an attacker with N parallel processors (i.e. machines or cores) can gain a speedup of N times in their attacks. The only known method to counteract this speedup is to increase the computational cost of each evaluation, factoring in this increased attacker power. This is not an effective mitigation because it imposes a far higher cost on the defender for relatively limited security gain.

A point of contention:
In theoretical models of computation, we typically evaluate the cost of an algorithm via the number of operations needed. In extended models, we also include the costs associated with memory capacity and access. This proposal suggests the possibility of also increasing the memory cost of each evaluation of a password hash. This helps increase the difficulty of brute forcing the password by forcing the attacker to also increase the memory for each parallel computation.

In order to analyze the cost increase, we will now examine how this impacts attackers with access to different kinds of hardware and how memory requirements impact speed.

The weakest attackers are those that have limited (typically single-core) parallel computational power. These attackers are highly limited in how much parallelization they can afford. For example, they may need to buy N machines in order to gain a speedup of N. This is not a very effective attack, and therefore we will not consider this attacker further.

The second type of attacker has systems designed for parallel processing. For each machine the attacker uses, they can run X possible evaluations at a time. However, many such machines are slower in terms of how long they take to return output. We call this slowdown S. The attacker may also have up to N parallel processing machines. The effective speedup can be calculated as (X*N)/S. In many scenarios, the slowdown factor S can be negated by the increase in X. This increased computational power is further multiplied by the number of machines dedicated to the attack effort. This attacker is highly dangerous and poses a threat to the defender.
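Plugging illustrative numbers into the (X*N)/S formula above shows why the per-unit slowdown barely matters:

```python
# Effective attacker speedup (X*N)/S, with made-up illustrative numbers.
X = 1024   # parallel evaluations per machine (a wide parallel processor)
N = 8      # machines dedicated to the attack
S = 4      # per-evaluation slowdown relative to the defender's hardware

speedup = (X * N) / S
print(speedup)   # 2048.0 -- S is swamped by the width X and machine count N
```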

Finally, we consider the fact that well-resourced attackers may specifically design ASICs to perform parallel evaluations of possible passwords in an attempt to maximize their gains from compromised authentication data. ASICs are specially designed circuits that are only able to perform a specific function. This specialization greatly improves their speed and efficiency in performing said function at the cost of not being generalizable. Attackers with this computational capacity have a similar computational speedup to the second type, but their value of X is far greater than typically anticipated. This poses an extreme threat to defenders.

Now we consider the costs of memory and how such costs impact the three types of attackers we have explored. Memory is relatively expensive compared to pure logic, and accessing it increases the amount of time it takes to perform computation.

The first attacker type is not substantially affected by the imposition of memory-associated costs. The machines they use can likely handle similar amounts of memory, with similar memory latency, as the defender's and are therefore not substantially affected by these costs. Thus, memory cost imposition has negligible effects on this attacker. However, this attacker is already limited in the amount of computational speedup they can acquire, and therefore the defender is at relative parity with them.

The second type of attacker, however, will start to run into problems. Highly parallel general-purpose machines do not have much memory dedicated to each parallel execution, and memory latency can be a problem in some designs. Depending on the minimum amount of memory needed and the stress put on memory latency in the evaluation of a single hash, the attacker may suffer substantial reductions in the number of parallel executions they can perform. In the worst case, they may be unable to use such machines at all and therefore lose all computational speedups they would otherwise have.

The third type of attacker is far more affected. ASIC memory is not easy to integrate in high amounts. This means that past a certain critical point, the machine must rely on external memory. This imposes memory latency costs on the evaluation of each hash. At the same time, they may also run into bandwidth limitations depending on the number of parallel evaluations they wish to perform. This means that ASICs are far more constrained than the prior two attacker types and suffer more when memory costs are imposed.

Therefore, we believe that memory cost imposition in password hashes is a powerful tool for reducing the asymmetry between attacker and defender computational power.

We further detail our estimates and discuss other details like time memory tradeoffs in our accompanying report to this proposal("Estimating the effectiveness of memory cost imposition in slow password hashing").

Call to action:
These theoretical results and estimates are great for defenders. However, they are useless without algorithms that can impose such costs. We propose two main ways to do so.
1-Use larger hashing operations that have higher memory costs:
An initial retrofit can be done by replacing the cryptographic hashes we are using with new or modified versions that have larger internal states and repeatedly access multiple locations in memory during computation. We also propose the possibility of using far larger salts, or using cryptographic key generation algorithms to create an expanded state that is then hashed back down during each iteration.
2-Explore methods to impose memory costs:
The first method is of limited utility over a longer time span. We are currently investigating the possibility of expensive hashing algorithms that incur an intentionally high number of memory accesses. We are also exploring methods that force the use of large amounts of memory during the computation of hashes. This effort will require further resources but should result in resilient password hashing algorithms that will help improve the state of defenses.
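A toy sketch of the kind of memory-hard construction the second method aims at (the construction and parameters are made up for illustration, not a vetted design): fill a table of pseudorandom blocks, then perform data-dependent reads so the whole table must stay resident.

```python
# Toy memory-hard hash sketch. Phase 1 builds a large table (the memory
# cost); phase 2 reads it at data-dependent indices, so the table cannot
# be streamed or discarded early. Illustrative only -- do not deploy.
import hashlib

def memory_hard_hash(password: bytes, salt: bytes,
                     blocks: int = 1 << 14, rounds: int = 1 << 14) -> bytes:
    # Phase 1: derive a table of pseudorandom 32-byte blocks.
    state = hashlib.sha256(salt + password).digest()
    table = []
    for _ in range(blocks):
        state = hashlib.sha256(state).digest()
        table.append(state)
    # Phase 2: data-dependent accesses -- the next index depends on the
    # current state, forcing the whole table to remain available.
    for _ in range(rounds):
        i = int.from_bytes(state[:4], "big") % blocks
        state = hashlib.sha256(state + table[i]).digest()
    return state

h = memory_hard_hash(b"hunter2", b"\x00" * 16)
```

Real designs of this shape (scrypt's ROMix, Argon2) add time-memory tradeoff resistance and careful mixing; the sketch only shows why an attacker must pay for memory per parallel guess.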

We also request that password hash storage format standards start providing mechanisms for the storage of memory cost parameters alongside the hash. Such support has already been provided for salts and iteration parameters and should be easy to retrofit as needed.
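One possible shape for such a self-describing storage record (the algorithm name and field layout here are hypothetical, merely in the spirit of the request):

```python
# Sketch of a storage record that carries its own cost parameters, so
# verifiers can recompute the hash without out-of-band configuration.
# "mhash1" and the m=/t= fields are made-up names for illustration.
import base64
import os

def encode(alg: str, mem_kib: int, iterations: int,
           salt: bytes, digest: bytes) -> str:
    b64 = lambda b: base64.b64encode(b).decode()
    return f"${alg}$m={mem_kib},t={iterations}${b64(salt)}${b64(digest)}"

record = encode("mhash1", 16384, 3, os.urandom(16), os.urandom(32))
# e.g. "$mhash1$m=16384,t=3$<salt-b64>$<digest-b64>"
```

Modern schemes settled on essentially this idea (the "$"-delimited modular crypt / PHC-style strings used by bcrypt and Argon2).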
AN: This is me trying to help advance the state of security in Guangchou. At this time, the state of the art was literally stuff like crypt (the DES version) and LM hash. The functions used to hash passwords back then were not highly resilient to attack. Even weak stuff like PBKDF2 only came out circa the 2000s, and it should be considered generally inadequate (PBKDF2 can be easily parallelized and cracked even with high iteration counts).

In terms of when this was created in-quest, it was before SCL Snapshot 2. The "WEISOFT CREDENTIAL VERIFICATION HARDENING" thread in there was the result of what the proposal suggested. In-quest, I think designers would probably come up with and put into use something like bcrypt, which came out in 1999. bcrypt has what we would call cache hardness. Its speed depends in large part on the ability to rapidly access a portion of memory that would fit in a cache but not in registers. This means that, at minimum, cracking systems must either have enough fast-access memory or perform a time-memory tradeoff optimization to efficiently brute force bcrypt hashes. It is still usable in the modern day (but, to be honest, please use Argon2id if you need a password hash or slow key derivation function).
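For a present-day, standard-library approximation of what the AN recommends, scrypt is a memory-hard KDF (Argon2id is preferred, as the AN says, but it isn't in Python's standard library; parameters below are illustrative, not a tuned recommendation):

```python
# scrypt (in hashlib since Python 3.6) is a memory-hard KDF; n, r, p
# tune CPU and memory cost. n=2**14, r=8 costs roughly 16 MiB per hash.
import hashlib
import os

salt = os.urandom(16)
key = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                     n=2**14, r=8, p=1, dklen=32)
assert len(key) == 32
```

An attacker brute forcing these hashes must budget ~16 MiB per parallel guess, which is exactly the asymmetry reduction the proposal argues for.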
 
I have an idea, a totally unhackable communication system!
Pneumatic tubes! :D

Just a joke, there's no such thing as a totally unhackable communication system (I think?). Even though it would be cool to have some of those (they're fun/cool), I'm not sure where they would be useful. Do you know where that would be possible?

(quoting "A proposal for better password hashing" above)
Isn't the biggest security flaw for this kind of thing the human factor?
Maybe improving users' understanding of security ("no, don't leave your password lying around on a desk; no, your password cannot be 1234 or your date of birth; don't bring external data storage back into the lab/facility/military base/etc.; don't open the suspicious email marked "free nude pictures" or "you won a free trip"; etc.) would make our systems harder to penetrate without being too hard to implement?
 
The problem is that there is only so much we can do regarding passwords. Users will pick passwords that are easy for them to memorize, and trying to increase the guessing entropy can only do so much. Worst case scenario, you get sticky-note passwords on the computers, which leaves you vulnerable to physical attackers on top of having passwords that are easy to guess. There are other things we could try doing (i.e. pulling out something like zxcvbn). You also need to consider the fact that people need to be able to easily use their systems. If the password requirements are too hard on the users, expect many support calls and a rollback of policy.

Enhancing the password hash at least helps alleviate this problem and puts no cost on the users of the system. It also helps with the other use case for passwords, which is user-supplied secrets. This is important for stuff like encryption keys.

As for the other human security problems, that is out of scope for this proposal. There's not much you can do if the attacker gets the password anyway (via either malware or phishing). 2FA can help but needs to be implemented correctly, etc.
 
Also:
Just a joke, there's no such thing as a totally unhackable communication system
Depending on the threat model, it is possible. So long as you can:
1-Authenticate the other side somehow
2-Trust the hardware before it is connected to the internet during setup
3-Avoid getting your hardware seized by the adversary or side-channeled by a planted bug that uses power or other side channels
4-Ensure both sides follow protocol and are not snitches
you can use something like TFC (but replace the key exchange with something like McEliece KEM for quantum resistance) and have unhackable communications. Not easy communications, mind you, but usable.
Or, if you can pre-exchange data, use a one-time pad, generating the whole random pad with dice after debiasing the rolls.
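The one-time-pad idea can be sketched as follows (using the OS RNG as a stand-in for debiased dice rolls; the security argument requires the pad to be truly random, as long as the message, and never reused):

```python
# One-time pad: XOR the message with a random pad of equal length.
# With a truly random, single-use pad, the ciphertext is information-
# theoretically secure. secrets stands in for debiased dice here.
import secrets

message = b"meet at dawn"
pad = secrets.token_bytes(len(message))   # pre-exchanged secret pad

ciphertext = bytes(m ^ k for m, k in zip(message, pad))   # encrypt
plaintext  = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt
assert plaintext == message
```

The catch, as the post says, is the logistics: you must securely pre-exchange as much pad material as you ever intend to send, and never reuse any of it.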

PS: Stuff like this is why I don't like it when people claim that criminals using encryption means we need backdoors or equivalents (including stuff like metadata access and similar). It's not effective at the end of the day.
 
Tangential note: I had to write up a network proposal, and my idea for dealing with passwords was to
a) accept they'll be insecure,
b) require a smart card (something you have, which must be physically stolen), and
c) require SSO authentication. (Most providers, like Okta or Office, increasingly require 2FA, so you therefore need a phone authenticator or SMS.)

d) SMS authentication is not secure (the notifications are sent through Google or Apple; your IMEI can be cloned, but that'd require.. you know.. physical access), etc.

But it's more secure than just a password.

I like the proposal, well written!
 
I was wondering, who do we secure our IT against?
I do not deny the need to do so; it is better to get into good habits from the start. I am just asking who is likely to try to mess with our IT, how (is it really the end if they succeed in obtaining physical access?), and why.
Also, who is likely to succeed? If I understand correctly, our things are very advanced for the time, and it would be very hard for an enemy to beat our system, right?
 

Everyone. You secure it against everyone. But in this case, in-universe, the people I'd *normally* suspect are the US and the EU, but it's likely the EU and.. probably Japan and Australia.
 