I really don't get a lot of this criticism. For example, who is using Iceberg with hundreds of concurrent committers, especially at the scale mentioned in the article (10k rows per second)? Using Iceberg or any table format over object storage would be insane in that case. But a typical Spark application has one main writer (the Spark driver) appending or merging a large batch of records in > 1 minute microbatches, plus maybe a handful of maintenance jobs for compaction and retention; Iceberg's concurrency system works fine there.
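To put a rough number on why hundreds of concurrent committers is a bad fit: Iceberg commits are optimistic, so a writer that loses the commit race re-reads table metadata and retries. Here's a toy worst-case model (my own sketch, not Iceberg code) where all N pending writers attempt a commit each round and only one wins:

```python
def total_commit_attempts(num_writers: int) -> int:
    """Toy model of optimistic concurrency under full contention:
    each round, every remaining writer reads the latest snapshot and
    tries to commit; exactly one succeeds, the rest retry next round."""
    attempts = 0
    remaining = num_writers
    while remaining > 0:
        attempts += remaining  # every remaining writer spends one attempt
        remaining -= 1         # one writer's commit lands
    return attempts

print(total_commit_attempts(200))  # 20100
```

Attempts grow quadratically, N(N+1)/2, and each retry means more metadata reads and manifest rewrites against object storage. Real workloads with disjoint partitions won't all conflict like this, but it's a reasonable intuition for why the single-writer-plus-maintenance-jobs pattern is the sweet spot.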
If you have a use case like the one the author describes, consider an in-memory cloud database with tiered storage or a plain RDBMS instead. Iceberg (and similar formats) work great for the use cases they were designed for.