Going OTA, Infrastructure Refresh, and/or the insanity behind it.

Speedrunning Migraine.

Hey guys, it's me, Raphiel, welcome back to Among Us. In today's episode, we're going to speedrun a migraine.

What

In a world where "Triple-A" ROM "developers" are trying to pander to buildbots and everyone else, it's sure refreshing to have a ROM that sets out to do one thing, and do it best.

So, we boot up Visual Studio Code and go through the journey of an Infrastructure Engineer getting the bitter taste of becoming a Data Scientist, Designer, Backend Engineer, and everything in between. If you're reading this, I'm assuming you're probably a nerd, since I don't actually want to help people design sophisticated software stacks with requirements that not even BBK is willing to meet.

I'm here to tell a story, and if you're clamouring for something sophisticated and haven't already gone the same way and rejected the idea of LineageOS, do yourself a favour.

There are enough nerd hormones here to transition someone, and I can guarantee you results, my fellow sigma engineers.

So whether you're a non-euclidean like me or new to the modern software stack, come with me on this amazing journey through Data Modelling, Design, Engineering, High-Performance Low-Latency software stack, Nonsensically Fucked Up Database Query Design, and Blue Archive's 4th Promotional Video.

For Money is Temporary, but Cunny is Eternal.

The idea

The idea started when, on a beautiful day, someone told me, "I may make an updater app", and everything went downhill from there. Ideally, as Google tells us, you want most of an updater's logic to be done server-side, so we took that seriously.

After a long, rigorous debate of "how the fuck does this do this and that do that", we kept going.

Cleveland.

So, our story began in Wyoming, where Yellowstone National Park does a little trolling, causing the tectonic plate to fucking rupture. In an instant, the entire modern age, and Christianity in particular, ended.

This, however, created a mysterious, volatile storage called MongoDB, which effectively gave everyone a faster database. Hundreds of years later, MongoDB formed a federation to prevent further challenges to its rules, and the Federation tightened its power and relicensed MongoDB. But primarily, the MongoDB Federation was just mad that other people were allowed to use the limitless database that they had, and wanted to sell it to everyone else at a ridiculous markup.

This sets the scene for a war of independence, which you get caught in the middle of, or rather, insert yourself into for money.

Data Modelling

So, let's start with the idea of how our MongoDB Data Model is going to be laid out

Oversimplified and doesn't represent our real database model

As we can gather from that Database Model, our database is laid out across multiple collections, namely:

  1. OTA Metadata collection, containing metadata extracted from files, information about the build, etc.
  2. UserID collection, containing UserIDs used for Beta and Patreon Enrollment IDs
  3. Storage Metadata, which comes from our Cloud Storage solution
  4. Miscellaneous Metadata, which is used for our internal tooling

We are adopting the One-to-Many Relationship with Document References model to simplify our relationship flows, as it has the fewest tradeoffs.
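
To make that concrete, here is a minimal sketch of what document references look like with the official mongo-driver (primitive is go.mongodb.org/mongo-driver/bson/primitive). The struct and field names are hypothetical, not our actual schema:

// Hypothetical, heavily simplified documents. The build document holds
// ObjectID references into the Storage Metadata collection instead of
// embedding those documents; that's the "Document References" half of
// the one-to-many model.
type OTABuild struct {
  ID          primitive.ObjectID   `bson:"_id,omitempty"`
  Device      string               `bson:"device"`
  Timestamp   int64                `bson:"timestamp"`
  StorageRefs []primitive.ObjectID `bson:"storage_refs"`
}

type StorageObject struct {
  ID   primitive.ObjectID `bson:"_id,omitempty"`
  URL  string             `bson:"url"`
  Size int64              `bson:"size"`
}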

I would say that our database design is quite unique, and not for the reason that you would think. Everything in our database design is funnelled directly into a single, robust, multifaceted, multinational, and unilateral database model, around which the entire database is built.

"But Raph"

I hear you thinking

"That's every database ever!"

Yes, every good database ever. If I, for instance, went back in time and booted up MySQL, I would be able to do at least a dozen unfun activities. Our database design is more focused than the average LineageOS build is on its local SQLite database.

Fighting MongoDB and Myself

During the development stage of Lanneoliv (our monoservice) and Kivotos (the component of Lanneoliv that handles OTA), I forgot to change the MongoDB server deployed in both Staging and Production from the pinned version 4.2 to the latest version.

My Brother in Christ.

Why pinning? Because we have to iterate fast, and we can't accommodate having to pull the MongoDB container on the magical day it suddenly gets a version bump in the middle of busy staging iterations. And yeah, it ended up not really mattering, as we could go forward with the latest (MongoDB is at version 6.2 at the time of this writing)

Fighting MongoDB Query

During the development iteration of Kivotos itself, we struggled with one of the pain points of every database operation: the Query.

50.

MongoDB queries themselves can be written in multiple ways, and the idea of Query and Aggregation being different things made this way more confusing than it should be.

For now, we stayed with Query rather than going Aggregation. Yes, I know that conditionals are cool and all, but translating Queries into Aggregations isn't the most trivial thing for us.

Thus, we get a final query that can be summarized into one combined query:

speedrunningAMigraine := bson.M{
  "$and": []bson.M{
    timestampCheck,
    buildCheck,
    preConditionCheck,
    sdkLevelCheck,
    betaCheck,
    approvalCheck,
  },
}
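
Each of those sub-checks is itself just another bson.M. As a rough illustration (the field name and the deviceTimestamp variable here are made up, not our real schema), timestampCheck could be as small as:

// Hypothetical sub-check: only offer builds strictly newer than the
// one the device reported. "timestamp" is an illustrative field name.
timestampCheck := bson.M{
  "timestamp": bson.M{"$gt": deviceTimestamp},
}

And if we ever do cave and go Aggregation, the whole $and drops into a single $match stage as-is, so the escape hatch is there.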

Protobuf, and how Google doesn't even properly use them

So, in the next adventure, we get to play "type translations"

Yes, pain.

Google's OTA Metadata specifications are laid out as follows:

This proto is embedded in every build and can be parsed to gather information about the build itself.

// Read the entire serialized OtaMetadata blob from the reader.
manifestRaw, err := io.ReadAll(r)
if err != nil {
  return nil, err
}

var manifest protobuf.OtaMetadata
if err := proto.Unmarshal(manifestRaw, &manifest); err != nil {
  return nil, err
}

return &manifest, nil

Most of this is parseable, but SDK Level and Timestamp have left me in a perpetual state of sophisticated malding, as I have to convert the SDK Level from string to integer and make another relationship for the converted Timestamp, since the Timestamp in Google's OTA Metadata is a Unix timestamp.
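
The conversions themselves are small. A sketch, assuming the stock getters generated from Google's ota_metadata.proto (sdk_level is a string on the postcondition DeviceState, timestamp an int64 Unix timestamp), plus strconv and time from the standard library:

// sdk_level arrives as a string in the proto; the database wants an int.
sdkLevel, err := strconv.Atoi(manifest.GetPostcondition().GetSdkLevel())
if err != nil {
  return nil, err
}

// timestamp is already a Unix timestamp; convert it once so the
// human-readable form can live in its own related document.
buildTime := time.Unix(manifest.GetPostcondition().GetTimestamp(), 0).UTC()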

Another issue is that the Client also hasn't been able to return the PartitionState of the device; this is a TODO for future us.

Meanwhile, the OTA metadata itself is laid out as follows:

Yes, Brick OTA is canon to Android lore.

With this understood, we move on to the logic of how the checks will be laid out. So, to give another oversimplified flow of how we can use this, here is the flow:

Banjo Kazooie

V1, It's V0 but simpler and yet sophisticated.

V0 but Twitch

As we progress our knowledge of how to roleplay as a dragon, have multiple rounds of foreplay with the OTA Metadata, and also watch some Rat Porn and Intersex Arctic Fox Porn, we continue our journey by implementing Seamless Streamed OTA.

💡
This is also the point where we figured out that we don't actually have to return everything to the client, as the entire flow logic is done server-side, because Google said so.

Parsing Payload Metadata

So, do you like parsing strings? I don't.

Let's begin by actually looking at how Google lays out their metadata in the package

bro

Have you looked at that and thought

Damn that looks ass

Yes, indeed. So let's break that into something saner

// The property-files string is a comma-separated list of
// "filename:offset:size" entries.
s := strings.TrimSpace(input)
files := strings.Split(s, ",")

for _, file := range files {
  parts := strings.Split(file, ":")
  if len(parts) != 3 {
    return fmt.Errorf("malformed property-files entry: %q", file)
  }
  filename := parts[0]
  offset, err := strconv.ParseInt(parts[1], 10, 64)
  if err != nil {
    return err
  }
  size, err := strconv.ParseInt(parts[2], 10, 64)
  if err != nil {
    return err
  }

  [...]
}

Which is going to return as follows

[
    {
        "filename": "payload.bin",
        "offset": 679,
        "size": 343
    },
    {
        "filename": "payload_properties.txt",
        "offset": 378,
        "size": 45
    },
    {
        "filename": "payload.bin",
        "offset": 69,
        "size": 379
    }
]

OK, now it's better. Then we throw that at payload_properties, until we noticed that letting the update engine seek payload_properties.txt is not the brightest thing to do. So let's take a look at what payload_properties.txt's content is in general

FILE_HASH=clGjz1kJ/Toxcax0Ap8d2cCVupI1xoBBXgqOzNK+IeQ=
FILE_SIZE=1345770359
METADATA_HASH=EG0gbI1eQ5PCQhcOovjiP8zK1H14T6CL8znOwAnQRnE=
METADATA_SIZE=98416

This is an example, you can't use this lmao

Before you ask: yes, I know that I can parse this from the header of the payload itself, which is laid out as follows

struct delta_update_file {
  char magic[4] = "CrAU";
  uint64 file_format_version;  // payload major version
  uint64 manifest_size;  // Size of protobuf DeltaArchiveManifest

  // Only present if format_version >= 2:
  uint32 metadata_signature_size;

  // The DeltaArchiveManifest protobuf serialized, not compressed.
  char manifest[manifest_size];

  // The signature of the metadata (from the beginning of the payload up to
  // this location, not including the signature itself). This is a serialized
  // Signatures message.
  char metadata_signature_message[metadata_signature_size];

  // Data blobs for files, no specific format. The specific offset
  // and length of each data blob is recorded in the DeltaArchiveManifest.
  struct {
    char data[];
  } blobs[];

  // The signature of the entire payload, everything up to this location,
  // except that metadata_signature_message is skipped to simplify signing
  // process. These two are not signed:
  uint64 payload_signatures_message_size;
  // This is a serialized Signatures message.
  char payload_signatures_message[payload_signatures_message_size];
};
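
If you did want to go down that road, here is a minimal sketch of reading just the fixed front of that header in Go. update_engine writes these fields big-endian; payloadHeader and readPayloadHeader are names I made up for the sketch:

import (
  "encoding/binary"
  "fmt"
  "io"
)

// payloadHeader mirrors the fixed-size front of delta_update_file.
type payloadHeader struct {
  Version      uint64 // payload major version
  ManifestSize uint64 // size of the serialized DeltaArchiveManifest
  MetaSigSize  uint32 // only present when Version >= 2
}

func readPayloadHeader(r io.Reader) (*payloadHeader, error) {
  // The 4-byte magic comes first and must be "CrAU".
  magic := make([]byte, 4)
  if _, err := io.ReadFull(r, magic); err != nil {
    return nil, err
  }
  if string(magic) != "CrAU" {
    return nil, fmt.Errorf("bad payload magic %q", magic)
  }

  var h payloadHeader
  if err := binary.Read(r, binary.BigEndian, &h.Version); err != nil {
    return nil, err
  }
  if err := binary.Read(r, binary.BigEndian, &h.ManifestSize); err != nil {
    return nil, err
  }
  if h.Version >= 2 {
    if err := binary.Read(r, binary.BigEndian, &h.MetaSigSize); err != nil {
      return nil, err
    }
  }
  return &h, nil
}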

But yes, it's easier to just parse this directly from payload_properties.

The values in payload_properties are key-value pairs separated by "="; we can just loop through the lines and get everything we need.

// loop through each line and assign values to fields
// based on key-value pairs separated by "="
for _, line := range lines {
  // find the index of "=" sign in the line
  index := strings.IndexByte(line, '=')
  // check if "=" sign is found
  if index != -1 {
    // slice the line before "=" sign as key
    key := line[:index]
    // slice the line after "=" sign as value
    value := line[index+1:]
    switch key {
    [...]
    }
  }
}

Praising the Lamb.

One of the key requirements of the OTA Delivery Project is Patreon authentication. This one is fairly simple yet sophisticated.

We start by digging into the Patreon API and OAuth

Me when uuhhh API

After digesting that and doing some research on the Patreon Developers Forum, because parts of the documentation are incomprehensible even for a non-euclidean, we can lay out the ideas as follows

Starting a Cult.

After getting that, the implementation is fairly "simple".
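
For context, the token half is plain OAuth2. A sketch using golang.org/x/oauth2, where the client ID, secret, redirect URL, and scopes are placeholders; the resulting *http.Client is what feeds patreon.NewClient as userTokenClient below:

import (
  "context"
  "net/http"

  "golang.org/x/oauth2"
)

// Patreon's OAuth2 endpoints, with placeholder credentials.
var patreonOAuth = oauth2.Config{
  ClientID:     "YOUR_CLIENT_ID",
  ClientSecret: "YOUR_CLIENT_SECRET",
  RedirectURL:  "https://example.com/oauth/callback",
  Scopes:       []string{"identity", "identity.memberships"},
  Endpoint: oauth2.Endpoint{
    AuthURL:  "https://www.patreon.com/oauth2/authorize",
    TokenURL: "https://www.patreon.com/api/oauth2/token",
  },
}

// Exchange the ?code= from the OAuth callback for a token, then build
// an *http.Client that attaches (and refreshes) it automatically.
func clientFromCode(ctx context.Context, code string) (*http.Client, error) {
  token, err := patreonOAuth.Exchange(ctx, code)
  if err != nil {
    return nil, err
  }
  return patreonOAuth.Client(ctx, token), nil
}

With those token clients in hand, the membership check looks like this: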

// Use the obtained access token to make API requests to the Patreon API
userClient := patreon.NewClient(userTokenClient)
creatorClient := patreon.NewClient(creatorTokenClient)
userResponse, err := userClient.FetchIdentity(fieldOpts, memberOpts, includeOpts)
if err != nil {
  return err
}
for _, item := range userResponse.Included.Items {
  id := item.(*patreon.Member).ID
  campaignData, creatorErr := creatorClient.FetchCampaignMember(id, memberOpts)
  if creatorErr != nil {
    continue
  }
  if campaignData.Data.Attributes.LastChargeStatus == "Paid" &&
    campaignData.Data.Attributes.PatronStatus == "active_patron" {
    [...]
    return ctx.Redirect(redirectionURL)
  }
}

Intent

To launch the Updater and set the ID, we need to launch the Updater package with an intent that sets the ID. This part is fairly self-explanatory; if you don't get it, consider reading Intents and Intent Filters | Android Developers.

Conclusion

Well, this might be a living post. As we can see, writing a properly designed and implemented OTA Backend is quite trivial for a non-euclidean (this pretty much replicates what Google does, except it was done by a single non-euclidean), with the help of a lot of alcohol (I'm not joking), Blue Archive, and Neural Cloud, something Lineage wouldn't ever possibly do.

It's worth it to spend money on us, I'm serious, and that's why you should give us your money so we can spend it responsibly on more infra.

I would like to thank the kind gacha players and furries of the hentaiOS Patreon for funding my hopeless and insane addiction to make another Google but in hell.

If you would like to help fund the project, corrupting LineageOS users and maintainers in the name of user builds and functional download resume in the OTA client, head to the bottom of the post and check out our Patreon to learn more.