10 days ago by rxliuli
I evaluated D1 for a project a few months ago, and found that global performance was pretty terrible. I don't know what exactly the issue with their architecture is, but if you look at the time-to-first-byte numbers here[1], you can see that even for the D1 demo database the numbers outside Europe are abysmal, and within Europe a TTFB above 200 ms still isn't great.
This post helps frontend developers understand some basic DB pitfalls, but I wouldn't use D1 regardless. If you can figure out how to use D1 as a frontend dev, you can use a hosted Postgres solution and get much more power and performance.
[1] https://speedvitals.com/ttfb-test?url=https://northwind.d1sq...
7 days ago by fastball
Has anyone tried analyzing the performance of Durable Objects with SQL storage? Is it as bad as D1's?
6 days ago by your_challenger
Another fun limitation is that a transaction cannot span multiple D1 requests, so you can't select from the database, execute application logic, and then write to the database in an atomic way. At most, you can combine multiple statements into a single batch request that is executed atomically.
When I needed to ensure atomicity in such a multi-part "transaction", I ended up making a batch request, where the first statement in the batch checks a precondition and forces a JSON parsing error if the precondition is not met, aborting the rest of the batch statements.
SELECT
IIF(<precondition>, 1, json_extract('inconsistent', '$')) AS consistent
FROM ...
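Since D1 is SQLite under the hood, the trick can be sketched locally with Python's sqlite3. This is a sketch, not D1 itself: the `accounts` table, the balance precondition, and the `guarded_batch`/`withdraw` helpers are all hypothetical, and D1's implicit per-batch transaction is emulated with the connection context manager.

```python
import sqlite3

# Hypothetical table standing in for real application data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 50)")
conn.commit()

def guarded_batch(conn, statements):
    """Run statements atomically; return False if any statement errors out."""
    try:
        with conn:  # commits on success, rolls back on exception
            for sql, params in statements:
                conn.execute(sql, params).fetchall()
        return True
    except sqlite3.OperationalError:
        # json_extract('inconsistent', '$') raised "malformed JSON",
        # aborting the rest of the batch.
        return False

def withdraw(conn, amount):
    # The first statement enforces the precondition: IIF (an alias for
    # CASE WHEN) evaluates its branches lazily, so the json_extract error
    # only fires when the balance check fails.
    return guarded_batch(conn, [
        ("SELECT IIF(balance >= ?, 1, json_extract('inconsistent', '$')) "
         "FROM accounts WHERE id = 1", (amount,)),
        ("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,)),
    ])
```

An overdraw attempt returns False and leaves the balance untouched, while a valid withdrawal commits both statements together.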
I was lucky here. For anything more complex, one would probably need to create tables to store temporary values, and translate a lot of application logic into SQL statements to achieve atomicity.
6 days ago by kpozin