We’re pleased to announce Round 22 of the TechEmpower Framework Benchmarks!
The TechEmpower Framework Benchmarks project celebrates its 10th anniversary, boasting significant engagement with over 7,000 stars on GitHub and more than 7,100 pull requests. Renowned as one of the leading projects of its kind, it benchmarks the peak performance of server-side web application frameworks and platforms, primarily using tests contributed by the community. Numerous individuals and organizations leverage the insights from the TechEmpower Framework Benchmarks to enhance their frameworks' performance.
Microsoft has been steadfast in its dedication to improving the performance of .NET and has been active in the Framework Benchmarks community to further this goal. With the release of .NET 8, it is clear that performance is paramount.
Here are some updates from our contributors:
@franz1981 on GitHub, @forked_franz on Twitter:
Right after Round 21, I worked on three projects, delivering:
- an improvement to HTTP parsing in Netty (affecting every Netty-based framework, including Vert.x and Quarkus), making it more branch-prediction friendly: https://github.com/netty/netty/pull/12321
- found (and fixed) a 20-year-old bug affecting a lot of Java programs (including Netty's HTTP encoding and Quarkus's ORM, i.e. Hibernate); see https://github.com/netty/netty/pull/12709 and https://www.youtube.com/watch?v=PxcO3WHqmng&ab_channel=DevoxxUK (my mate Sanne G. worked on fixing the Hibernate part: https://github.com/Sanne). This delivered a gigantic improvement, especially on Quarkus, explained in more detail at https://redhatperf.github.io/post/type-check-scalability-issue/
- replaced epoll read/write with recv/send (https://github.com/netty/netty/pull/12679), which delivered a 5-10% improvement on all Netty-based servers; a brief sketch of what this change means follows below
- mentored the GSoC 2020 project that brought io_uring into Netty (https://netty.io/wiki/google-summer-of-code-ideas-2020.html#add-io_uring-based-transport) and ported it (with @Julien Viet) to Vert.x at https://github.com/vert-x3/vertx-io_uring-incubator
All these changes have improved the performance of the mentioned frameworks by 40% to 200%, depending on the test.
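To make the read/write versus recv/send point more concrete, here is a minimal sketch in Rust (using the libc crate). This is not Netty's code; the address and buffer size are illustrative assumptions. The point is simply that recv/send are socket-specific entry points (with a flags argument), which is what the Netty epoll transport switched to.

```rust
// Illustration only: generic read() vs. socket-specific recv() on the
// same connected TCP socket.
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;

fn read_generic(sock: &TcpStream, buf: &mut [u8]) -> libc::ssize_t {
    // Generic file-descriptor read.
    unsafe { libc::read(sock.as_raw_fd(), buf.as_mut_ptr() as *mut _, buf.len()) }
}

fn read_socket(sock: &TcpStream, buf: &mut [u8]) -> libc::ssize_t {
    // Socket-specific recv; with flags = 0 it returns the same bytes.
    unsafe { libc::recv(sock.as_raw_fd(), buf.as_mut_ptr() as *mut _, buf.len(), 0) }
}

fn main() {
    // Hypothetical local endpoint, used only to obtain a connected socket.
    if let Ok(sock) = TcpStream::connect("127.0.0.1:8080") {
        let mut buf = [0u8; 1024];
        println!("read: {}", read_generic(&sock, &mut buf));
        println!("recv: {}", read_socket(&sock, &mut buf));
    }
}
```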
Oliver Trosien says:
I would like to use this opportunity to highlight Scala's "new kid on the block," Pekko, which is a fork of Akka currently undergoing incubation as an Apache project. One of the reasons for contributing it to the Framework Benchmarks was to verify that no obvious performance regressions were introduced in the process of forking, and the results look good! Pekko is very much on par with its legacy counterpart.
One contributor lists a few of Rust's performance optimizations:
In a real production environment, several approaches can be tried to optimize the application (a Rust sketch follows this list):
- Specifying a custom memory allocator
- Declaring static variables
- Putting a small portion of data on the stack
- Creating vectors and hash maps with an explicit capacity, so that at least that many elements can be inserted without reallocating
- Using SIMD
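As a rough illustration of how several of these points can look in Rust, here is a minimal sketch. The allocator choice (the mimalloc crate), the buffer and capacity sizes, and the variable names are assumptions made for the example, not code from any benchmarked framework.

```rust
// A sketch of the optimizations listed above; sizes and names are illustrative.

// 1. Specify a memory allocator (requires the `mimalloc` crate).
use mimalloc::MiMalloc;
use std::collections::HashMap;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

// 2. Declare static data so it is baked into the binary instead of being
//    rebuilt per request.
static PLAINTEXT_BODY: &[u8] = b"Hello, World!";

fn main() {
    // 3. Keep small, hot data on the stack: a fixed-size array instead of a
    //    heap-allocated Vec for a small scratch buffer.
    let mut scratch = [0u8; 64];
    scratch[..PLAINTEXT_BODY.len()].copy_from_slice(PLAINTEXT_BODY);

    // 4. Pre-size collections: with_capacity guarantees at least this many
    //    elements can be inserted without reallocating.
    let mut responses: Vec<&[u8]> = Vec::with_capacity(512);
    let mut headers: HashMap<&str, &str> = HashMap::with_capacity(16);
    headers.insert("Content-Type", "text/plain");
    responses.push(PLAINTEXT_BODY);

    // 5. SIMD: keep hot loops simple and branch-free over slices so the
    //    compiler can auto-vectorize them (explicit SIMD would use std::arch).
    let checksum: u64 = scratch.iter().map(|&b| b as u64).sum();

    println!("{} responses, {} headers, checksum {}",
             responses.len(), headers.len(), checksum);
}
```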
@fakeshadow says in response:
In general, you should not take anything from the TFB benchmarks and simply assume it is useful in the real world. Context and use case determine how you optimize your code.
By the way: xitca-web (the bench code, not including its dependencies) does not do 2, 3, or 5 and still remains competitive in this micro-benchmark, so it can be used as a reference.
I have written a blog post about TFB and Object Pascal: yes, we added our Object Pascal framework in Round 22! Regarding how we maximized our results for TFB: we tried several ways of accessing the DB (ORM, blocking, async), reduced syscalls as much as possible, minimized multi-thread locks especially around memory access (a /plaintext request requires no memory allocation), and did a lot of profiling and micro-optimization. The benefit of the Object Pascal language is obvious: it is at the same time a high-level language (with interfaces, classes, and safe ARC/COW strings), safe and readable, but also a system-level language (with raw pointers and direct access to memory buffers). So the user can write their business code with a high level of abstraction and safety, while the framework core can be tuned down to the assembly level to leverage the hardware it runs on. Finally, open source helped a lot in getting realistic feedback from production, even though the project and the associated FreePascal compiler are maintained by a small number of developers, and Object Pascal as a language is often underestimated.
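As a generic illustration of the "no allocation per /plaintext request" idea mentioned above (sketched here in Rust rather than Object Pascal, and not taken from the framework in question), a handler can answer from a single compile-time constant and a fixed stack buffer:

```rust
// Illustration only: the entire /plaintext response is a compile-time
// constant, so serving it needs no heap allocation per request.
use std::io::{Read, Write};
use std::net::TcpListener;

static RESPONSE: &[u8] = b"HTTP/1.1 200 OK\r\n\
Content-Type: text/plain\r\n\
Content-Length: 13\r\n\
Connection: keep-alive\r\n\r\n\
Hello, World!";

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Fixed stack buffer for the incoming request bytes.
        let mut buf = [0u8; 1024];
        while stream.read(&mut buf).map(|n| n > 0).unwrap_or(false) {
            // One write of a pre-built byte slice: no per-request allocation.
            stream.write_all(RESPONSE)?;
        }
    }
    Ok(())
}
```

A real server would of course parse the request and handle concurrency and partial writes; the point is only that the hot path touches no heap.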
Notes
A heartfelt thank you to all our contributors and fans! We recognize the complexities involved in executing a benchmarking project accurately. While it’s challenging to meet everyone’s expectations, we are committed to continual improvement and innovation, made possible with your invaluable support and collaboration.
Round 22 is composed of:
Run ID: 66d86090-b6d0-46b3-9752-5aa4913b2e33 on our Citrine environment.
Follow us on Twitter for regular updates.