Microcode fixes in the CPU can only work on the L1 caches; they do nothing at all for any other cache, internal or external (including application caches in user-mode code, bus registers/latches and fast adapters, caches in external routers/gateways/file servers, or remote application servers accessed for example through a REST API)...
There are MANY caches everywhere, and the fix for them lies not in the silicon or the microcode, but in the software itself (the hypervisor, the OS, the drivers, the applications, the remote services, the various backend servers). Even an SQL database or a filesystem can be attacked through time-based side channels.
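As a minimal illustration of that kind of timing side channel (a sketch only; the file path and threshold below are hypothetical and would have to be tuned per machine), a client can often tell whether some other party recently touched a piece of data simply by timing how long the service takes to return it: a cached entry comes back noticeably faster than one that must be fetched from disk or recomputed. The same principle applies to an SQL query whose result or plan is already cached versus one that is not.

```python
# Sketch: inferring whether a file (or any backend object) is "hot" in a cache
# purely from response time. The path and threshold are hypothetical.
import time

def timed_read(path: str) -> float:
    """Return the wall-clock time needed to read the file once."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start

def probably_cached(path: str, threshold_s: float = 0.001) -> bool:
    """Guess that the data was already cached if the read completed
    well under the threshold (tune per machine and storage)."""
    return timed_read(path) < threshold_s

if __name__ == "__main__":
    target = "/var/data/example.bin"   # hypothetical target
    print("probably cached?", probably_cached(target))
```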
All caching designs should include their own cache eviction policy and allow segregating caching levels according to the security profiles of the clients they want to isolate, as well as from the inner service itself, which must be protected and not attackable by any client. This requires extended "tagging" for each usage, plus slow, randomized reclamation of unused tagged areas so that they can later be reused, at a *really unpredictable* time, by other applications/clients/services.
But adding tags means securing the quotas allowed to each user and making sure that no one (not even the hypervisor) can alter the quota assigned to another. This means that the cache may have to contain "duplicate" entries for the same data, but with different tags and different eviction policies.
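As a minimal sketch of that idea (all class and parameter names here are hypothetical, not any vendor's API), each entry is keyed by both its security tag and its data key, so the same data may be duplicated under several tags; each tag gets a fixed quota that no caller can change afterwards, eviction stays inside a tag, and slots released by one tag only return to the shared pool after a randomized delay.

```python
# Sketch of a tag-partitioned cache: per-tag quotas, possible duplicate entries
# for the same data under different tags, and randomized deferred return of
# freed slots to the shared pool. All names are hypothetical.
import random
import time
from collections import OrderedDict

class TaggedCache:
    def __init__(self, total_slots: int, quotas: dict[str, int]):
        self._free = total_slots                            # slots not owned by any tag yet
        self._quotas = dict(quotas)                         # fixed; no caller may change it
        self._parts = {t: OrderedDict() for t in quotas}    # tag -> its own LRU entries
        self._cooling: list[float] = []                     # release times of freed slots

    def get(self, tag: str, key: str):
        part = self._parts[tag]
        if key in part:
            part.move_to_end(key)          # LRU bookkeeping stays inside the tag
            return part[key]
        return None

    def put(self, tag: str, key: str, value) -> None:
        self._recover_cooled_slots()
        part = self._parts[tag]
        if key not in part:
            if len(part) >= self._quotas[tag] or self._free == 0:
                if not part:
                    return                 # nothing to evict and no free slot: refuse
                part.popitem(last=False)   # evict within the same tag only
            else:
                self._free -= 1            # claim a slot from the shared pool
        part[key] = value

    def invalidate(self, tag: str, key: str) -> None:
        """Drop an entry; its slot cools down before other tags may reuse it."""
        if self._parts[tag].pop(key, None) is not None:
            self._cooling.append(time.monotonic() + random.uniform(1.0, 30.0))

    def _recover_cooled_slots(self) -> None:
        now = time.monotonic()
        still_cooling = [t for t in self._cooling if t > now]
        self._free += len(self._cooling) - len(still_cooling)
        self._cooling = still_cooling
```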
And this can be a severe problem for application servers that need to service many clients. Even a search engine like Google, servicing millions of requests each second, cannot create that many segregated caching areas, one per client, without forcing all of them down to an extremely low usage quota: the caches would then suffer very high miss rates for everyone, while many segregated parts would sit idle (but be kept in their state for a very long time to protect them from third-party attacks).
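A rough back-of-the-envelope computation illustrates the scale problem (the cache size and client count below are arbitrary assumptions, not figures from any real deployment):

```python
# Illustration only: per-client quota if a shared cache is strictly
# partitioned per client. The numbers are arbitrary assumptions.
cache_size_bytes = 64 * 2**30        # assume a 64 GiB shared result cache
concurrent_clients = 10_000_000      # assume 10 million clients to isolate

quota_per_client = cache_size_bytes / concurrent_clients
print(f"{quota_per_client:.0f} bytes per client")   # ~6.9 KB: barely one small entry
```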
If this is implemented, then Google will have to sell services with guaranteed response time/performance and available resources. Such a service cannot work with the "free/no-cost" model paid for only by advertising. The Google farms would also not be enough to support the load: centralized architectures, and clouds in general, will no longer work, unless Google starts selling services using distributed peer-to-peer devices (its clients buy a specific device for their exclusive use, deployed on their own premises, or rented at high prices in the few colocation areas, where the client will also pay for the servicing and the energy used by a device that will be idle most of the time yet not reusable by any other Google client).
Dedicated servers are for now the only solution for serious web servers, and they should have their own hardware, own memory, own storage, own backup solutions, own firewall... This will be much more costly than the "cloud hosting" offered now.
If customers need to deploy their own device (sold and preconfigured by Google for immediate use) or use a colocation area to have it installed there, the billing won't be the same, and resources for such sales will be scarce (many more colocation areas will have to be deployed around the world, and Google will need to employ many more people to service them; it may be good for job offers, however!). But energy-efficiency gains will be lost, except for very big organizations that want to connect their own private "supercomputer" and probably won't need Google as a third party at all, on a device on which Google would not even be allowed to collect user profiles or distribute advertising.
"Spectre" for me means this is the end (agin) of centralized computing, its reintroduction as "clouds" instead of the former "mainframes" was a myth, it won't survive long. And probably the whole concept of Internet (the way we know it today) is dead. and may be it's a good time to reintroduce "true-life" socialisation, with direct man-to-man interactions and limited delegations of trusts to small circles (do you remember that Google introduced "circles" than decided to kill it by first tweaking it for advertizers, then closing Google+ because of third party abuses?)
And anyway we need to reinforce the privacy rules: the GDPR (RGPD) was just phase 1. We must go further by introducing proof of delegation and forbidding transitive delegations of trust. Anonymity can be preserved outside the allowed "circles", but circles cannot work without proof of identity, circles must not be freely extensible by any one of their members, and circles have to remain private to each user. Keeping intruders out means the end of the "open" Internet (in reality open only to big-data players choosing their own contracts unilaterally without giving any choice).
Now look at what Intel does: it proposes OSS solutions, but then further restricts them with exclusive patent rights, so much that it does not even allow users to publish any discovery of what Intel did badly, or to publish any benchmark of Intel services. They are building a legal wall of lies allowing Intel to say and sell whatever it wants without any form of liability (Intel just says "buy it; if it breaks, it's your fault, and if it breaks my Intel service, you'll have to pay me unlimited fees for damages, and if you talk about this, Intel will sue you and seize your data, your OSS licence will be voided as well, you'll have to pay Intel for any other third party to whom you've distributed the OSS solution, and Intel will also prosecute them to force them to pay Intel").
Intel does all that because it knows that it is in severe trouble and may rapidly lose the commercial battle against AMD, or against Chinese, Korean and Russian foundries. Intel would then have no other choice than abandoning the x86 architecture and converting itself to ARM only, or buying licences from AMD or Chinese foundries... but for now, as there are issues in AMD and ARM as well, Intel thinks it can resist (though it won't be long before there's a huge attack, and I'm convinced that Google, Facebook, Apple, IBM and Samsung are already working on building their own architectures using a very different paradigm: we know that Google and IBM are working on quantum computers, and others may be working on peer-to-peer distributed architectures, notably Amazon for its B2C sales, which will implement their own P2P networking protocols).