"as typically tables": you said it yourself, you want to be able to serialize arbitrary data, as efficiently as possible. This is not limited to just discrete numbers (and their subtypes like booleans), and "strings". And all data is not just a flat array with known size (or predictable maximum size), even Lua tables are more complex than that.
Even the representation of numbers is not limited to a single fixed-size array of bits.
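One common alternative, as a sketch only (function names are mine): a base-128 "varint" like the one used by Protocol Buffers, where the byte length adapts to the magnitude of the value rather than being fixed by a machine type (Lua 5.3 integer operators assumed):

```lua
-- Unsigned base-128 "varint": each byte carries 7 bits of payload,
-- the high bit says "more bytes follow". Small numbers cost one byte.
local function encode_varint(n)
  assert(n >= 0)
  local out = {}
  repeat
    local byte = n & 0x7F
    n = n >> 7
    if n > 0 then byte = byte | 0x80 end   -- continuation bit
    out[#out + 1] = string.char(byte)
  until n == 0
  return table.concat(out)
end

local function decode_varint(s, pos)
  local n, shift = 0, 0
  repeat
    local byte = s:byte(pos)
    n = n | ((byte & 0x7F) << shift)
    shift, pos = shift + 7, pos + 1
  until byte < 0x80
  return n, pos                            -- value and next read position
end

print(#encode_varint(5))                         --> 1 byte
print(#encode_varint(1000000))                   --> 3 bytes
print(decode_varint(encode_varint(1000000), 1))  --> 1000000  4
```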
Just consider how you would represent a set (of elements drawn from an enumerable space of extremely large cardinality): you would not use a simple bitset; for any practical use you would need to make it sparse, and you would still need to perform some arithmetic on such sets (at least intersection and complement). A set traditionally attaches a boolean property to each element (it belongs to the set or not), but that boolean could just as well be a probability of presence.
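As a rough sketch of that (all names are mine): store only the present elements, compute the intersection by iterating one set and probing the other, and keep the complement lazy, as a predicate, since it cannot be materialized over a huge universe:

```lua
-- Sparse set over a huge key space: store only the present elements.
local function set_of(list)
  local s = {}
  for _, e in ipairs(list) do s[e] = true end
  return s
end

local function intersection(a, b)
  local r = {}
  for e in pairs(a) do
    if b[e] then r[e] = true end
  end
  return r
end

-- The complement over an enormous universe cannot be enumerated;
-- keep it lazy, as a membership predicate instead of a table.
local function complement(a)
  return function(e) return not a[e] end
end

local evens = set_of({ 0, 2, 4, 6, 8 })
local small = set_of({ 0, 1, 2, 3 })
for e in pairs(intersection(evens, small)) do print(e) end  --> 0 and 2
print(complement(small)(7))                                 --> true
```

Replacing "true" with a number in [0, 1] gives exactly the "probability of presence" variant.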
I pointed you to some existing research (by Google notably, but also on cloud storage in social networks). It has practical uses: it saves a lot of storage, a lot of dollars, and energy, and it allows building more resilient systems by also storing error-correcting codes that resist hardware or transmission failures. The space you save by compression can be reused to store the redundant correcting codes, which also allow distributing the data over more devices, giving better scalability and strong resistance to external attacks, without really costing more.
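To make the redundancy point concrete with the simplest possible sketch (not what that research actually uses; real systems use far stronger codes like Reed-Solomon): a single XOR parity block over N data blocks lets you rebuild any one lost block from the survivors:

```lua
-- Simplest redundancy scheme: an XOR parity block over equal-sized blocks.
local function xor_blocks(a, b)
  local out = {}
  for i = 1, #a do
    out[i] = string.char(a:byte(i) ~ b:byte(i))  -- Lua 5.3 bitwise XOR
  end
  return table.concat(out)
end

local blocks = { "ABCD", "EFGH", "IJKL" }        -- three data blocks
local parity = xor_blocks(xor_blocks(blocks[1], blocks[2]), blocks[3])

-- Simulate losing block 2, then rebuild it from the rest plus parity:
local rebuilt = xor_blocks(xor_blocks(blocks[1], blocks[3]), parity)
print(rebuilt)                                   --> EFGH
```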
Just consider GIF, JPEG, MPEG, and even PNG: they are now challenged by WEBP, HEIC/HEIF (both based on HEVC), and AVIF.
Now consider how these new (open) formats are built: at the lowest level, they have abandoned the two's-complement notation of integers, and they have integrated compression and resilience with error-correcting codes or strong data signatures.
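For illustration (a sketch, names are mine): the "zigzag" interleaving used by Protocol Buffers, in the same spirit as HEVC's signed Exp-Golomb mapping, replaces two's complement by folding signed integers onto unsigned ones so that small magnitudes of either sign stay small, ready to be fed to a variable-length code like the varint above:

```lua
-- "Zigzag" mapping: fold signed integers onto unsigned ones so that
-- small magnitudes (of either sign) get small codes:
--   0, -1, 1, -2, 2  ->  0, 1, 2, 3, 4
local function zigzag(v)
  if v >= 0 then return 2 * v else return -2 * v - 1 end
end

local function unzigzag(u)
  if u % 2 == 0 then return u // 2 else return -(u + 1) // 2 end
end

for _, v in ipairs({ 0, -1, 1, -2, 2 }) do
  io.write(zigzag(v), " ")            --> 0 1 2 3 4
end
print()
print(unzigzag(zigzag(-1000000)))     --> -1000000
```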