re:Invent from NZ > re:Invent from Vegas.
Am I right?
Ok, I might be clutching at straws here, but when you can’t make it to Vegas, being able to watch remotely is great. We probably forget that it wasn’t always this way…
As CEO Matt Garman’s keynote gets underway, AI-related topics are dominating, as expected.
This includes a long segment on the hardware and chipsets AWS has been designing for AI, covering the Trainium2 chips and Trainium2 UltraServers, which are connected via NeuronLink and allow processing beyond the capacity of a single server.
Apple presents on Apple Intelligence and how it runs on AWS, including an expected 50% efficiency gain from Trainium2.
Funnily enough (is it though?), the next announcement is Trainium3, which will be coming next year - late 2025. I bet it’ll be faster…
The conversation moves on to S3 - a gift to the tech world when it was delivered in 2006. S3 has continued to develop with various tiers and offerings, including S3 Intelligent-Tiering, which has delivered $4B+ in customer savings. S3 supports millions of data lakes globally.
S3 is loved because it just works.
However, AWS asked themselves: how can we make this better for analytics workloads involving tabular data like Parquet files? Today, these are often queried using Apache Iceberg.
And with that, we have the announcement of S3 Tables, which delivers up to 3x faster query performance and up to 10x higher transactions per second for Apache Iceberg tables. And it’s GA now.
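For a feel of what that looks like, here’s a minimal boto3 sketch - the s3tables client is new, so treat the exact operation and parameter names as my assumptions from the announcement:

```python
import boto3

# Hypothetical walkthrough of the new S3 Tables APIs via boto3 -
# names/shapes are assumptions based on the announcement.
s3tables = boto3.client("s3tables", region_name="us-east-1")

# A "table bucket" is the new bucket type purpose-built for Iceberg tables
bucket = s3tables.create_table_bucket(name="analytics-tables")

# Namespaces group tables, much like schemas in a database
s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)

# Create an Iceberg table; query it with your usual engine (Athena, Spark, ...)
s3tables.create_table(
    tableBucketARN=bucket["arn"],
    namespace="sales",
    name="orders",
    format="ICEBERG",
)
```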
The next rhetorical question posed is:
“What if we could make managing metadata easier?”
We’re presented with the example of a picture that contains metadata such as location, file size, colour mode, format and image resolution.
And with that, S3 Metadata is announced. When you upload an object, AWS takes the metadata associated with that file and stores it in a new table bucket (an Iceberg table); you can then use your favourite analytics tool to query this table and find the data you’re looking for. S3 updates the metadata within minutes of changes.
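In practice, that should mean you can point a SQL engine like Athena at the metadata table. A hedged sketch - the database, table, and column names below are placeholders, not the real naming scheme:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder names: the actual metadata table lives in the table bucket
# that S3 Metadata creates for you.
query = """
    SELECT key, size, last_modified_date
    FROM "s3_metadata_db"."media_bucket_metadata"
    WHERE content_type = 'image/png'
      AND size > 1048576
"""

athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```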
We have AWS customer Sierra (https://sierra.ai/) presenting in a brief snippet. Initially, your digital customer experience was delivered through websites, then mobile applications, but now, Sierra states, it will be with AI agents.
DynamoDB was built after Amazon determined that 70% of its database transactions were simple key-value operations (see the sketch below), which led AWS to create a purpose-built database rather than leverage a relational one. This kickstarted AWS building a number of other purpose-built databases, alongside continued development of relational databases. On that note, Aurora is now 10 years old.
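To picture what a “simple key-value operation” means, here’s a standard boto3 example (the table and attribute names are illustrative):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

# Write: one item, addressed purely by its key
table.put_item(Item={"OrderId": "o-1001", "Status": "SHIPPED", "Total": 42})

# Read: fetch it straight back by key - no joins, no query planner
item = table.get_item(Key={"OrderId": "o-1001"})["Item"]
print(item["Status"])
```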
So what is the perfect database?
Answering that typically involves trade-offs. ORs rather than ANDs… (we know the saying: you can have it cheap, high quality and fast - but pick only two).
Committing a transaction across multiple regions takes around 10 round trips, each multiplied by the latency between the regions, making writes SLOW. For example, at 100ms of inter-region latency, that’s a full second just to commit.
So AWS has decoupled storage from processing, which leads to a second problem - time drift between servers. To solve it, AWS has added a hardware reference clock to its servers that syncs with satellite time sources for microsecond accuracy, preventing clashes over transaction ordering.
And… Amazon Aurora DSQL is announced. Benchmarked against Google Spanner (which was revolutionary when released), Aurora DSQL delivers 4x faster reads and writes.
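Aurora DSQL is PostgreSQL-compatible, so connecting should feel like any other Postgres endpoint. A minimal sketch, assuming you already have a cluster endpoint and an IAM auth token to hand (the environment variable names here are placeholders):

```python
import os
import psycopg2

# Assumptions: DSQL presents a PostgreSQL-compatible endpoint and
# authenticates with a short-lived IAM token passed as the password.
conn = psycopg2.connect(
    host=os.environ["DSQL_ENDPOINT"],        # your cluster endpoint
    dbname="postgres",
    user="admin",
    password=os.environ["DSQL_AUTH_TOKEN"],  # generate via the DSQL SDK/CLI
    sslmode="require",
)

with conn.cursor() as cur:
    cur.execute("SELECT now()")  # commits are coordinated across regions for you
    print(cur.fetchone())
```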
Multi-Region strong consistency for Amazon DynamoDB global tables is also announced, giving you the choice of Aurora DSQL for globally distributed SQL or DynamoDB global tables for NoSQL.
Core building blocks have traditionally consisted of compute, storage and database. Matt’s view is that inference will equally become part of every application, and that the distinction between apps with AI and apps without will be a thing of the past.
Which is why AWS created Bedrock. One technique being used is to take a large “teacher” model (an LLM) and distill it into a smaller “student” model, which is faster but simpler.
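Conceptually - and this is the general technique, not Bedrock’s internals - distillation trains the student to mimic the teacher’s softened output distribution. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic teacher->student distillation: train the student to match
    the teacher's temperature-softened probabilities (KL divergence)."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitude stays consistent across temperatures
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

# Toy usage: a batch of 4 examples over a 10-token vocabulary
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```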
Amazon Bedrock Model Distillation is announced, which handles the distillation for you; the distilled models are up to 500% faster and 75% cheaper than the models they were distilled from.
Amazon Bedrock Knowledge Bases lets you bring your own business data, and Amazon Bedrock Guardrails provides safeguards for your generative AI applications. For example, you can prevent your models from answering questions about irrelevant topics such as politics or medicine.
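To make that concrete, here’s a hedged boto3 sketch of a denied-topics guardrail - the shape follows the existing create_guardrail API, but treat the details as illustrative:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Deny off-topic questions outright rather than letting the model improvise
bedrock.create_guardrail(
    name="support-bot-guardrail",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Politics",
                "definition": "Questions or opinions about political topics.",
                "type": "DENY",
            },
            {
                "name": "MedicalAdvice",
                "definition": "Requests for diagnosis or treatment advice.",
                "type": "DENY",
            },
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't help with that topic.",
)
```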
However, hallucination is another concern businesses have with AI. The example given: a customer asking whether a leak in their bathroom is covered by their insurance - the answer needs to be accurate. Automated reasoning is a form of AI that proves a system is working the way it is intended to. This leads to the announcement of Amazon Bedrock Automated Reasoning Checks, which develops rules based on your policies; you then tune those rules. With the checks in place, the system will keep prompting the customer, or defer answers it cannot be sure are correct, rather than providing a potentially wrong one.
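The behavioural shift is essentially “prove it or defer”. A purely conceptual sketch of that control flow (nothing here is the Bedrock API):

```python
from enum import Enum

class Verdict(Enum):
    VALID = "valid"       # provably consistent with the policy rules
    INVALID = "invalid"   # provably contradicts a rule
    UNKNOWN = "unknown"   # the checker can't prove it either way

def respond(answer: str, check) -> str:
    """Only surface answers the reasoning check can actually prove."""
    verdict = check(answer)
    if verdict is Verdict.VALID:
        return answer
    if verdict is Verdict.INVALID:
        return "That answer contradicts policy - let me re-check the rules."
    # UNKNOWN: defer or ask a clarifying question instead of guessing
    return "I can't confirm that yet - could you share your policy details?"
```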
The use of AI agents is growing rapidly. Agents are great for simple tasks, but for complex workflows you end up with lots of tailored agents, one per use case. Amazon Bedrock multi-agent collaboration is announced: a “supervisor” or “brain” of sorts that sits across the simple agents you might have, allowing multiple specialised agents to complete complex tasks together.
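The pattern itself is simple. A conceptual sketch (again, not the Bedrock API) of a supervisor routing sub-tasks to specialised agents:

```python
from typing import Callable

# Each specialised agent handles one narrow job well
AGENTS: dict[str, Callable[[str], str]] = {
    "billing":  lambda task: f"[billing agent] handled: {task}",
    "shipping": lambda task: f"[shipping agent] handled: {task}",
}

def supervisor(plan: list[tuple[str, str]]) -> list[str]:
    """The 'brain': decompose a complex request and dispatch each piece."""
    return [AGENTS[domain](task) for domain, task in plan]

# A complex workflow becomes a plan of simple, routable sub-tasks
print(supervisor([("billing", "refund order o-1001"),
                  ("shipping", "re-send order o-1001")]))
```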
Andy Jassy, President and CEO of Amazon, is brought on stage and talks about how AI is being used across the broader Amazon product set and back-end operations.
Andy announces the Amazon Nova foundation models, including:

- Nova Micro: a fast, low-cost text-only model
- Nova Lite: a low-cost multimodal model
- Nova Pro: a more capable multimodal model balancing accuracy, speed and cost
- Nova Premier: the most capable tier, coming in 2025
- Nova Canvas: image generation
- Nova Reel: video generation
Essentially, a raft of readily available models to get started with Gen AI more quickly and easily.
And that’s the end of my highlights reel for the CEO keynote. There were some nice announcements in there, if nothing earth-shattering from leftfield. The strong focus on AI capabilities was clear, perhaps with more of an orientation towards AWS’s traditional “builders” than the end-consumer focus we’ve seen elsewhere - which means a lot of the AI-powered products that reach our hands over the coming years may well be powered by AWS services in the backend.