At TechEmpower, we frequently talk to startup founders, CEOs, product leaders, and other innovators about their next big tech initiative. It’s part of our job to ask questions about their plans, challenge their assumptions, and suggest paths to success.

The conversations are interesting and varied because they’re about new, exciting, different things. After all, that’s what tech innovation is all about. But there are some common elements too. One common element – and one that’s pretty surprising – is that often, no one has asked these innovators even the most basic technical questions.

Even when they have talked to multiple developers or development firms, we’re often the first to ask basic questions like “Who are your customers?” or “Are you developing for desktop, tablet, mobile, or all three?”

Of course, it’s more complicated than just checking boxes on a question list. The innovator/developer relationship needs to be a conversation. Still, if you’re a business leader and your developers haven’t asked you these questions, look for a Fractional CTO to help navigate the critical early stage of development.

Background Questions

Let’s start with some background questions about the business and product. Think of these as the big upfront questions a developer should ask to get an overall picture.

  1. Who are the customers? What’s their specific need/pain? Can you provide specific examples of different types of customers, what they need, and what the system will do for them?
  2. Tell me about the business. How are you funding this? What level of funding do you currently have? Who’s helping you with fundraising? Do you have legal (Founder Agreement, IP, etc.) in place?
  3. What are your big milestones? Do you have any deals done or in progress that are tied to those milestones? Where are you today, and what’s happening right now?
  4. What’s been done so far to validate the concept?
  5. Who are the other stakeholders involved? Are there other founders, business leaders, partners, or administrators?
  6. How will you be taking this to market? What channels will you use (e.g., Ads, Viral/Social, SEO)? Is anyone working with you on this?
  7. What are your key Startup Metrics? How do you make your money? How do you measure success?
  8. Who are your big competitors? What are some sites or companies in the same space? How will you differentiate from these?
  9. What is different, special here? Where’s the mystery? Do you have a custom algorithm or other technology?
  10. What special data, content, APIs, etc., will you leverage? What’s the state of the relationships that bring you that data? What’s the state of those systems?
  11. Where do you stand on your brand? Do you have a name, a logo, and have you thought about brand positioning? What are some examples of similar brands?
  12. Are there any specific hard dates or important time-sensitive opportunities?
  13. What do you see as your biggest risks and challenges?
  14. What are the key features in each major phase of your application? What functionality would make your company launch-ready?
  15. What has been captured so far? Are there user stories? Mock-ups? Wireframes? Comps?
  16. What problem is your product trying to solve?
  17. If you launched tomorrow, how many users would you forecast? Six months from now? A year from now? How quickly will we need to scale the application?

Questions Developers May Have Forgotten to Ask

Here are some additional questions that might have slipped your developers’ minds.

  1. eCommerce: Does your startup run on a subscription model? How many kinds of subscriptions do you support? What are the rules for subscriptions? Do you support discounts? Free trials? Bundling? Coupons?

     Often this ties to marketing support. For example, you might want to offer a discount to a given group to provide an incentive.

  2. Targets: Are you developing a native app and/or a web app? Are you targeting desktop, tablet, or mobile? Can you do a hybrid web/native application? Which devices will you test on specifically? Most new sites need to account for mobile delivery – but on the other hand, not every MVP needs both desktop and mobile versions.

  3. Registration: Do you plan to support Google Sign-In, Facebook Connect, or similar 3rd-party authentication? If so, will you also have your own account system? Will you validate new members’ email addresses and/or phone numbers?

  4. Artificial Intelligence: Does your application leverage AI in any way? For customer service? To personalize customer recommendations? How can we use AI to improve the customer experience?

  5. Member Profiles: What data is included? Is there a step-by-step wizard? Can members upload their pictures? How much member profile information do you need before allowing a user to register?

  6. Social Integration/Viral Outreach: Is your application tied into any social networks? How tight is that integration? Is it limited to login and Like buttons, or are you building a presence within the social networks themselves? What about other kinds of viral outreach?

  7. Communication/Forums: Are there discussion forums? Commenting? Messaging? If you have boards or comments, do you support flagging? Moderation?

  8. Social Interaction: Do users/members relate to one another? If so, how do they interact? Are users otherwise grouped by the system, maybe by background (employer, university) or preferences?

  9. Internationalization/Localization: Do you anticipate an international audience? How important is support for multiple languages? For multiple character sets? How do we prioritize internationalization versus getting something to market?

  10. Location/Geography: Is your application location-aware? Does it tap into geolocation services provided by the browser or rely on a third-party lookup table? How are you using geographic information? How does the application behave when location data is not available?

  11. Gamification/Scoring: Does your application include any kind of scoring and/or gamification? Are there achievements and badges? Is there a leaderboard for users or teams?

  12. Video and Audio: Are you hosting your own video, or can we use a third-party host like YouTube or Vimeo? Do you need to process user-contributed media? What about reporting and moderation?

  13. Notifications: What notifications does your application need? Are they dismissable? Do they generate emails or push notifications?

  14. Email/SMS: Does your application send out transactional emails or SMS messages? En masse? How are those mass messages crafted? How often is message content updated? Do you need to track views and bounces? What are your privacy rules?

  15. AI-Assisted Development: How can we use AI to speed up our SDLC? How can we leverage AI to get our product to market faster?

  16. Marketing Support: What does your application need to do to help with marketing? Do you have specific landing pages? What are your referral sources, and what tracking do you need around these sources? Do you rely on affiliates? Is there a need for A/B testing?

  17. SEO Support: Will URLs and page content need to be properly formed for SEO? What back-end support for SEO is needed?

  18. Content Management: How often will the application’s content need to change? Who will be doing the changes? Will you need to add arbitrary new pages? Should content changes be scheduled? Are members contributing content or only system administrators?

  19. Dates and Time Zones: Does the application need to support multiple time zones? Does it need to convert dates automatically?

  20. Search: Does the application include search? What content is searchable? How advanced does it need to be?

  21. Logging/Auditing: What key operations need to be logged for auditing? What needs to be logged for customer support?

  22. Analytics/Metrics: What key startup metrics will you need to track? What metrics will you need for future funding rounds or operations?

  23. Administration: What will you need to do from the back end? Administer users? Send messages?

  24. Reporting: What needs to be reported? Are CSV/Excel exports sufficient, or do you need something more? Reporting can be endless! Our advice: keep it small to start.

  25. Accounting: Beyond reviewing transactions, what accounting support do you need? Do you need to track inventory? Fulfillment?

  26. Customer Support: Do you need specific interfaces and support for customer service? Do you need a ticket system? What about an AI support assistant?

  27. Security: What are the business and application’s specific security risks? Does the application need to throttle potentially malicious activity? This is generally a significant discussion in itself! Our advice: be pragmatic!

  28. Performance: What is the expected request volume? What response-time characteristics are required? How complex is the application? Complexity can directly impact performance.

  29. Integration Points: What third-party systems will we need to integrate with? How far along are any integration efforts? What is the business relationship with the third parties? Who controls access to the third-party accounts, if any?

  30. Existing Capabilities: What capabilities and personnel do you already have access to? Graphic design? UI/UX design? A Product Manager? How much availability do they have to work on this effort?

  31. Hosting: What hosting requirements does your application have? Do you have any existing hosting relationships?

  32. Platform: Are there pre-existing technical platform decisions that must be considered?

  33. Team and Process: Are you using, or planning to use, any software development methodologies? How big is the anticipated development team? How will it be structured?

  34. Product Management: Do you have a clear vision of the initial application and a plan for sequencing changes after the initial launch? Do you have the internal staff to manage changes?

  35. Compliance: What regulatory compliance do you need to support? GDPR? CCPA? HIPAA?

  36. Expansion: What is your vision for the expansion of the application? What features should be in place at launch? Six months from now? A year from now?

Why Your Startup Needs a Fractional CTO

July 10, 2023

Nate Brady

In the fast-paced world of startups, a common mantra is to move quickly, iterate, and stay lean. This often drives an impulse to hire a hands-on lead developer or VP of Engineering. At first glance this makes sense – after all, you need someone who can dive into the trenches and get the product off the ground. While there’s undeniable merit to this approach, a frequent unintended consequence is the creation of a “Founder-Developer Gap.” This divide between the business-oriented founders and the tech-focused development team can stifle innovation and disrupt the realization of the startup’s vision. Our solution? Enter the Fractional Chief Technology Officer.

Bridge the Founder-Developer Gap

A Fractional CTO is a part-time role that provides the benefits of a traditional CTO without the full-time commitment or cost. This is particularly helpful for startups that require extensive technical oversight or that don’t have a founder with a strong technical background.

Founders usually have a clear vision of the direction their product should take. But when they lack the expertise to understand the technical implications of their decisions, a gap can emerge. This gap can result in miscommunication, incorrect assumptions, and misaligned priorities.

A Fractional CTO bridges this gap by acting as a translator and guide. By talking tech with the dev team and talking business with the founders, they ensure alignment between the original vision and its technical implementation.

The Role of a Startup CTO

A Fractional CTO provides leadership for a startup by addressing these key questions:

  • Cost: How much will it cost to build our product? How can we control costs and still hit deadlines?
  • Strategy and Market Response: Given likely market changes, how should we design and build our systems to adapt quickly?
  • Risk Management: What are our areas of technical risk, and how can we address them?
  • Technology Selection: What technologies will we use? What existing systems will we leverage? What are the potential future integration points?
  • Scalability: How do we anticipate and address potential scalability issues without significant cost?
  • Product Roadmap: How do we manage our product roadmap, balancing short-term progress and longer-term objectives?
  • Team Structure: What will our team look like over time? When will key hires come on, and what capabilities will we need? Do we need a designer? A UI/UX expert? A QA team?
  • Technical Due Diligence: How do we ensure our startup can survive technical due diligence by investors and partners?
  • Innovation and Protection: What specific technical innovations might make sense? What can we build that might be protectable?

A Fractional CTO can help navigate these complexities, allowing founders to remain focused on setting the vision and growing the business.

Unlock Your Startup’s Potential

A Fractional CTO provides experienced strategic thinking on a flexible basis, giving your startup the benefit of their guidance without a full-time commitment. By bridging the gap between founders and developers, they help keep your tech strategy aligned with your business goals. This helps your startup stay agile and competitive in a fast-paced marketplace.

So if you’re in the throes of steering your fledgling startup, consider enlisting the expertise of a Fractional CTO. Their insights and direction might just be the catalyst that propels your startup toward success!

 

Framework Benchmarks Round 20

February 8, 2021

Nate Brady

 

Today we announce the results of the twentieth official round of the TechEmpower Framework Benchmarks project.

Now in its eighth year, this project measures the high-water mark performance of server side web application frameworks and platforms using predominantly community-contributed test implementations. The project has processed more than 5,200 pull requests from contributors.

Round 20 Updates from our contributors

In the months between Round 19 and Round 20, about four hundred pull requests were processed. Some highlights shared by our contributors:

(Please reach out if you are a contributor and didn’t yet get a chance to share your updates. We’ll get them added here.)

Notes

Thanks again to contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 20 is composed of:

Framework Benchmarks Round 19

May 28, 2020

Nate Brady

 

Round 19 of the TechEmpower Framework Benchmarks project is now available!

This project measures the high-water mark performance of server side web application frameworks and platforms using predominantly community-contributed test implementations. Since its inception as an open source project in 2013, community contributions have been numerous and continuous. Today, at the launch of Round 19, the project has processed more than 4,600 pull requests!

We can also measure the breadth of the project using time. We continuously run the benchmark suite, and each full run now takes approximately 111 hours (4.6 days) to execute the current suite of 2,625 tests. And that number continues to grow steadily as we receive further test implementations.

Composite scores and TPR

Round 19 introduces two new features in the results web site: Composite scores and a hardware environment score we’re calling the TechEmpower Performance Rating (TPR). Both are available on the Composite scores tab for Rounds 19 and beyond.

Composite scores

Frameworks for which we have full test coverage will now have composite scores, which reflect an overall performance score across the project’s test types: JSON serialization, Single-query, Multi-query, Updates, Fortunes, and Plaintext. For each round, we normalize results for each test type and then apply subjective weights for each (e.g., we have given Fortunes a higher weight than Plaintext because Fortunes is a more realistic test type).
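
To make the mechanics concrete, here is a minimal sketch of that computation. The actual normalization and weights are defined by the project (see the GitHub wiki, linked below); the weight values in this sketch are placeholders chosen only to illustrate the idea that, for example, Fortunes counts for more than Plaintext.

```python
# Minimal sketch of a composite score computation. The normalization and
# weights used by the project are defined on the GitHub wiki; the values
# below are placeholders for illustration only.
WEIGHTS = {
    "json": 1.0,
    "single-query": 1.0,
    "multi-query": 1.0,
    "updates": 1.0,
    "fortunes": 1.5,   # weighted higher than plaintext, as noted above
    "plaintext": 1.0,
}

def composite_score(framework_rps, best_rps):
    """Both arguments map test type -> requests per second for one round."""
    score = 0.0
    for test, weight in WEIGHTS.items():
        # Normalize against the round's best result for this test type.
        score += weight * (framework_rps[test] / best_rps[test])
    return score
```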

When additional test types are added, frameworks will need to include implementations of these test types to be included in the composite score chart.

You can read more about composite scores at the GitHub wiki.

TechEmpower Performance Rating (TPR)

With the composite scores described above, we are now able to use web application frameworks to measure the performance of hardware environments. This is an exploration of a new use case for this project, unrelated to the original goal of improving software performance. We believe this could be an interesting measure of hardware environment performance because it is a holistic test of compute and network capacity, and is based on a wide spectrum of software platforms and frameworks used in the creation of real-world applications. We look forward to your feedback on this feature.

Right now, the only hardware environments being measured are our Citrine physical hardware environment and Azure D3v2 instances. However, we are implementing a means for users to contribute and visualize results from other hardware environments for comparison.

Hardware performance measurements must use the specific commit for a round (such as 801ee924 for Round 19) to be comparable, since the test implementations continue to evolve over time.

Because a hardware performance measurement shouldn’t take 4.6 days to complete, we use a subset of the project’s immense number of frameworks when measuring hardware performance. We’ve selected and flagged frameworks that represent the project’s diversity of technology platforms. Any results files that include this subset can be used for measuring hardware environment performance.

The set of TPR-flagged frameworks will evolve over time, especially if we receive further input from the community. Our goal is to constrain a run intended for hardware performance measurement to several hours of execution time rather than several days. As a result, we want to keep the total number of flagged frameworks somewhere between 15 and 25.

You can read more about TPR at the GitHub wiki.

Other Round 19 Updates

Once again, Nate Brady tracked interesting changes since the previous round at the GitHub repository for the project. In summary:

Notes

Thanks again to contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 19 is composed of:

Framework Benchmarks Round 18

July 9, 2019

Nate Brady

 

Round 18 of the TechEmpower Framework Benchmarks project is now available!

When we posted the previous round in late 2018, the project had processed about 3,250 pull requests. Today, with Round 18 just concluded, the project is closing in on 4,000 pull requests. We are repeatedly surprised and delighted by the contributions and interest from the community. The project is immensely fun and useful for us and we’re happy it is useful for so many others as well!

Notable for Round 18

Nate Brady tracked interesting changes since the previous round at the GitHub repository for the project. Several of these are clarifications of requirements for test implementations. In summary:

  • Thanks to An Tao (@an-tao), we clarified that the “Date” header in HTTP responses must be accurate. It is acceptable for it to be recomputed by the platform or framework once per second and cached as a string or byte buffer for the duration of that second (see the sketch after this list).
  • To keep frameworks from breaking the test environments by consuming too much memory, the toolset now limits the amount of memory provided to the containers used by test implementations.
  • The requirements for the Updates test were clarified to permit a single update. We are still considering whether to classify test implementations by whether they use this tactic.
  • The requirements were clarified to specify that caching or memoization of the output of JSON serialization is not permitted.
  • The toolset now more strictly validates that responses provide the correct JSON serialization.
  • Cloud tests in Azure are using Azure’s accelerated networking feature.
  • Postgres has been upgraded to version 11.
  • Nikolay Kim (@fafhrd91) explained the tactics used by Actix to achieve record performance on the Fortunes test.
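
As an aside on the first item above, the once-per-second Date header tactic is easy to picture. Here is an illustrative sketch (in Python, not drawn from any particular test implementation) of what the clarified requirement permits:

```python
# Hypothetical illustration of the permitted tactic: recompute the HTTP date
# string at most once per second and reuse the cached bytes for every
# response sent during that second.
import time
from email.utils import formatdate

_cached_second = None
_cached_header = b""

def date_header() -> bytes:
    global _cached_second, _cached_header
    now = int(time.time())
    if now != _cached_second:           # at most one recomputation per second
        _cached_second = now
        _cached_header = formatdate(now, usegmt=True).encode("ascii")
    return _cached_header               # e.g. b"Tue, 09 Jul 2019 17:00:00 GMT"
```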

Other updates

  • Round 18 now includes just over two hundred test implementations (which we call “frameworks” for simplicity).
  • The results web site now includes a dark theme, because dark themes are the new hotness. Select it at the bottom right of the window when viewing results.

Notes

Thank you to all contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 18 is composed of:

Framework Benchmarks Round 17

October 30, 2018

Nate Brady

 

We’re happy to announce that Round 17 of the TechEmpower Framework Benchmarks project is now available. Since the adoption of Continuous Benchmarking, the creation of an official Round is a fairly simple process:

  1. Try to reduce errors in framework implementations. We want an official Round to have a solid showing by as many frameworks as feasible given limited personnel bandwidth.
  2. Select a continuous run on the physical hardware (named “Citrine”) that looks good and identify its Git commit.
  3. Run the same commit on cloud (Azure).
  4. Write a blog entry and post the Round.

For weeks, we have been stuck at step 4 waiting on me to write something, so I’m going to keep it short and sweet to get this out before Halloween.

Stratified database results

As you review Round 17 results, you’ll notice that Postgres database tests are stratified—there are groups of test implementations that seem implausibly faster than other test implementations.

The underlying cause of this is the use of a Postgres protocol feature we have characterized as “query pipelining” because it is conceptually similar to HTTP pipelining. We call it pipelining, but you could also call it multiplexing. It’s a feature of the “Extended Query” protocol in Postgres. Query pipelining allows database clients to send multiple queries without needing to wait for each response before sending the next. It’s similar to batching, but provided invisibly by the driver. Theoretically (and, we believe, in practice) short queries can be sent together in the same network packets.

Importantly (at least for us), the client application’s business logic is unchanged. This is not batching of queries implemented by the application or web frameworks, but rather an optimization provided invisibly by the database driver.
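
To make the idea concrete, here is a minimal sketch of pipelined queries using psycopg 3’s pipeline mode in Python. This is only an illustration of the protocol feature, not a depiction of any specific test implementation in the project; the connection string and table are placeholders.

```python
# Conceptual sketch of Postgres query pipelining using psycopg 3's pipeline
# mode (illustration only; test implementations in the project use their own
# drivers). Inside the pipeline block, queries are queued and sent without
# waiting for individual responses; results are read after the pipeline syncs.
import psycopg

with psycopg.connect("dbname=hello_world") as conn:  # placeholder connection
    with conn.pipeline():
        cursors = [
            conn.execute("SELECT randomnumber FROM world WHERE id = %s", (i,))
            for i in (1, 2, 3)
        ]
    # Exiting the pipeline block synchronizes with the server.
    for cur in cursors:
        print(cur.fetchone())
```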

We discussed the pros and cons of permitting this optimization in our tests. While we have disallowed several performance-optimizing tactics elsewhere as violations of the rules or spirit of our tests, we ultimately arrived at permitting this optimization for two reasons. First, it can be applied “seamlessly” to application code. The application code does not need to be aware this is happening. And second, because this isn’t a trick applied at the application tier, but rather an optimization that potentially benefits all applications (once implemented by drivers), we feel this is precisely the sort of improvement this project should encourage.

In short, we think this feature in the Postgres wire protocol is great. We suspect that over time, more platforms will support the “pipelining” capability, gradually re-balancing the stratification we’re seeing.

The effect of this feature is most emphatic on the Multi-query and Updates tests, both of which execute a tremendous number of queries per second. We are also considering adding an attribute to indicate and filter test implementations using traditional database connections versus those using pipelining.

Other updates

  • Round 17 is now measuring 179 frameworks.
  • New languages such as F# are now represented.
  • The results web site is a bit wider in response to feedback.

In the works

We are presently working on a few things that we hope to share with the community soon. These include:

  • Potential changes to the physical environment’s network to allow further differentiation of ultra high-performance frameworks on the Plaintext test in particular.
  • Badges for framework maintainers to share and celebrate their performance tier, similar to popular continuous integration badges seen in GitHub repositories.

Notes

Thank you to all contributors and fans of this project. As always, we really appreciate your continued interest, feedback, and patience!

Round 17 is composed of:

Framework Benchmarks Round 16

June 6, 2018

Nate Brady

 

Now in its fifth year, the TechEmpower Framework Benchmarks project has another official round of results available. Round 16 is a real treat for anyone who likes big numbers. Not just in measured results per second (several metric crap tonne), but in number of tests measured (~1830), number of framework permutations tested (~464), number of languages included (26), and total execution time of the test suite (67 hours, or 241 billion microseconds to make that sound properly enormous). Take a look at the results and marvel at the magnitude of the numbers.

Recent months have been a very exciting time for this project. Most importantly, the community has been contributing some amazing test implementations and demonstrating the fun and utility of some good-natured performance competition. More on that later. This is a longer-than-average TFB round announcement blog entry, but there is a lot to share, so bear with me.

Dockerification… Dockerifying… Docking?

After concluding Round 15, we took on the sizeable challenge of converting the full spectrum of ~460 test implementations from our home-brew quasi-sandboxed (only mostly sandboxed) configuration to a stupefying array of Docker containers. It took some time, but The Great Dockerification has yielded great benefits.

Most importantly, thanks to Dockerizing, the reproducibility and consistency of our measurements are considerably better than in previous rounds. Combined with our continuous benchmarking, we now see much lower variability between each run of the full suite.

Across the board, our sanity checking of performance metrics has indicated Docker’s overhead is immeasurably minute. It’s lost in the noise. And in any event, whatever overhead Docker incurs is uniformly applicable as all test implementations are required to be Dockered.

Truly, what we are doing with this project is a perfect fit for Docker. Or Docker is a perfect fit for this. Whichever. The only regret is not having done this earlier (if only someone had told us about Docker!). That and not knowing what the verb form of Docker is.

Dockerificationization.

New hardware

As we mentioned in March, we have a new physical hardware environment for Round 16. Nicknamed the “Citrine” environment, it is three homogeneous Dell R440 servers, each equipped with a Xeon Gold 5120 CPU. Characterized as entry- or mid-tier servers, these are nevertheless turning out to be impressive when paired with a 10-gigabit Ethernet switch.

Being freshly minted by Dell and Cisco, this new environment is notably quicker than equipment we have used in previous rounds. We have not produced a “difference” view between Round 15 and Round 16 because there are simply too many variables—most importantly this new hardware and Docker—to make a comparison remotely relevant. But in brief, Round 16 results are higher than Round 15 by a considerable margin.

In some cases, the throughput is so high that we have a new challenge from our old friend, “network saturation.” We last were acquainted with this adversary in Round 8, in the form of a Giant Sloar, otherwise known as one-gigabit Ethernet. Now The Destructor comes to us laughing about 10-gigabit Ethernet. But we have an idea for dealing with Gozer.

(Thanks again to Server Central for the previous hardware!)

Convergence in Plaintext and JSON serialization results

In Round 16, and in the continuous benchmarking results gathered prior to finalizing the round, we observed that the results for the Plaintext and JSON serialization tests were converging on theoretical maximums for 10-gigabit networking.

Yes, that means that there are several frameworks and platforms that—when allowed to use HTTP pipelining—can completely saturate a ten-gigabit-per-second network with ~140-byte response payloads using relatively cheap commodity servers.
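
For a rough sense of what saturation means here, a back-of-envelope calculation (ignoring Ethernet, IP, and TCP framing overhead, so the real ceiling is somewhat lower) looks like this:

```python
# Back-of-envelope ceiling for ~140-byte HTTP responses on 10-gigabit
# Ethernet, ignoring framing overhead.
link_bits_per_second = 10 * 10**9
response_bits = 140 * 8
print(f"{link_bits_per_second / response_bits:,.0f} responses/second")  # ≈ 8,900,000
```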

To remove the network bottleneck for future rounds, we are concocting a plan to cross the streams, in a manner of speaking: use lasers and the fiberoptic QSFP28 ports on our Cisco switch to bump the network capacity up a notch.

Expect to hear more about this as the plan develops during Round 17.

Continuous benchmarking

Introduced prior to Round 16, the continuous benchmarking platform really came into a fully-realized state in the past several months. Combined with the Great Dockening, we now see good (not perfect, but good) results materializing automatically every 67 hours or thereabouts.

Some quick points to make here:

  • We don’t expect to have perfection in the results. Perfection would imply stability of code and implementations and that is not at all what we have in mind for this project. Rather, we expect and want participants to frequently improve their frameworks and contribute new implementations. We also want the ever-increasing diversity of options for web development to be represented. So expecting perfection is incongruous with tolerating and wanting dynamism.
  • A full suite run takes 67 hours today. This fluctuates over time as implementation permutations are added (or deleted).
  • Total execution time will also increase when we add more test types in the future. And we are still considering increasing the duration of each individual benchmarking exercise (the duration we run the load generator for to gather a single result datum). That is the fundamental unit of time for this project, so increasing that will approximately linearly increase the total execution time.
  • We have already seen tremendous social adoption of the continuous benchmarking results. For selfish reasons, we want to continue creating and posting official rounds such as today’s Round 16 periodically. (Mostly so that we can use the opportunity to write a blog entry and generate hype!) We ask that you humor us and treat official rounds as the super interesting and meaningful events that they are.
  • Jokes aside, the continuous results are intended for contributors to the project. The official rounds are less-frequent snapshots suitable for everyone else who may find the data interesting.

Social media presence

As hinted above, we created a Twitter account for the TechEmpower Framework Benchmarks project: @TFBenchmarks. Don’t don’t @ us.

Engaging with the community this way has been especially rewarding during Round 16 because it coincided with significant performance campaigns from framework communities.

Rust has blasted onto the server-side performance scene with several ultra high-performance options that are competing alongside C, C++, Go, Java, and C#.

Speaking of C#, a mainstream C# framework from a scrappy startup named Microsoft has been taking huge leaps up the charts. ASP.NET Core is not your father’s ASP.NET.

Warming our hearts with performance

There is no single reason we created this project over five years ago. It was a bunch of things: frustration with slow web apps; a desire to quantify the strata of high-water marks across platforms; confirming or dispelling commonly-held hunches or prevailing wisdom about performance.

But most importantly, I think, we created the project with a hopeful mindset of “perhaps we can convince some people to invest in performance for the better of all web-app developers.”

With expectations set dutifully low from the start, we continue to be floored by statements that warm our hearts by directly or indirectly suggesting this project has had an impact.

When asked about this project, I have often said that I believe that performance improvements are best made in platforms and frameworks because they have the potential to benefit the entire space of application developers using those platforms. I argue that if you raise the framework’s performance ceiling, application developers get the headroom—which is a type of luxury—to develop their application more freely (rapidly, brute-force, carefully, carelessly, or somewhere in between). In large part, they can defer the mental burden of worrying about performance, and in some cases can defer that concern forever. Developers on slower platforms often have so thoroughly internalized the limitations of their platform that they don’t even recognize the resulting pathologies: Slow platforms yield premature architectural complexity as the weapons of “high-scale” such as message queues, caches, job queues, worker clusters, and beyond are introduced at load levels that simply should not warrant the complexity.

So when we see developers upgrade to the latest release of their favorite platform and rejoice over a performance win, we celebrate a victory. We see a developer kicking performance worry further down the road with confidence and laughter.

I hope all of the participants in this project share in this celebration. And anyone else who cares about fast software.

On to Round 17!

Technical notes

Round 16 is composed of:

Framework Benchmarks Hardware Update

March 13, 2018

Nate Brady

 

We have retired the hardware environment provided by Server Central for our Web Framework Benchmarks project. We want to sincerely thank Server Central for having provided servers from their lab environment to our project.

Their contribution allowed us to continue testing on physical hardware with 10-gigabit Ethernet, which gives the highest-performing frameworks an opportunity to shine. We were particularly impressed by Server Central’s customer service and technical support, which were responsive and helpful in troubleshooting configuration issues even though we were using their servers free of charge. (And since the advent of our Continuous Benchmarking, we were essentially using the servers at full load around the clock.)

Thank you, Server Central!

New hardware for Round 16 and beyond

For Round 16 and beyond, we are happy to announce that Microsoft has provided three Dell R440 servers and a Cisco 10-gigabit switch. These three servers are homogeneous, each configured with an Intel Xeon Gold 5120 CPU (14/28 cores at 2.2/3.2 GHz), 32 GB of memory, and an enterprise SSD.

If your contributed framework or platform performs best with hand-tuning based on cores, please send us a pull request to adjust the necessary parameters.

These servers together compose a hardware environment we’ve named “Citrine” and are visible on the TFB Results Dashboard. Initial results are impressive, to say the least.

Adopting Docker for Round 16

Concurrent with the change in hardware, we are hard at work converting all test implementations and the test suite to use Docker. There are several upsides to this change, the most important being better isolation. Our past home-brew mechanisms to clean up after each framework were, at times, akin to whack-a-mole as we encountered new and fascinating ways in which software may refuse to stop after being subjected to severe levels of load.

Docker will be used uniformly—across all test implementations—so any impact will be imparted on all platforms and frameworks equally. Our measurements indicate trivial performance impact versus bare metal: on the order of less than 1%.

As you might imagine, the level of effort to convert all test implementations to Docker is not small. We are making steady progress. But we would gladly accept contributions from the community. If you would like to participate in the effort, please see GitHub issue #3296.

Framework Benchmarks Round 15

February 14, 2018

Nate Brady

As of 2018-03-13, Azure results for Round 15 have been posted. These were not available when Round 15 was originally published.

What better day than Valentine’s Day to renew one’s vow to create high-performance web applications? Respecting the time of your users is a sure way to earn their love and loyalty. And the perfect start is selecting high-performance platforms and frameworks.

Results from Round 15 of the Web Framework Benchmarks project are now available! Round 15 includes results from the physical hardware environment at Server Central and cloud results from Microsoft Azure.

We ❤️ Performance

High-performance software warms our hearts like a Super Bowl ad about water or an NBC Olympics athlete biography.

But really, who doesn’t love fast software? No one wants to wait for computers. There are more important things to do in life than wait for a server to respond. For programmers, few things are as rewarding as seeing delighted users, and respecting users’ time is a key element of achieving that happiness.

Among the many effects of this project, one of which we are especially proud is how it encourages platforms and frameworks to be fast—to elevate the high-water marks of performance potential. When frameworks and platforms lift their performance ceiling upward, application developers enjoy the freedom and peace of mind of knowing they control their applications’ performance fate. Application developers can work rapidly or methodically; they can write a quick implementation or squeeze their algorithms to economize on milliseconds; they can choose to optimize early or later. This flexibility is made possible when the framework and platform aren’t boxing out the application—preemptively consuming the performance pie—leaving only scraps for the application developer. High-performance frameworks take but a small slice and give the bulk of the pie to the application developer to do with as they please.

This Valentine’s Day, respect yourself as a developer, own your application’s performance destiny, and fall in love with a high-performance framework. Your users will love you back.

Love from the Community

Community contributions to the project continue to amaze us. As of Round 15, we have processed nearly 2,500 pull requests and the project has over 3,000 stars on GitHub. We are honored by the community’s feedback and participation.

We are routinely delighted to see the project referenced elsewhere, such as this project that monitors TCP connections, which used our benchmarks to measure overhead, or the hundreds of GitHub issues discussing the project within other repositories. We love knowing others receive value from this project!

More Immediate Results for Contributors

When you are making contributions to this project, you want to see the result of your effort so you can measure and observe performance improvements. You also want/need log files in case things don’t go as expected. To help accelerate the process, we have made the output of our continuous benchmarking platform available as a results dashboard. Our hardware test environment is continuously running, so new results are available every few days (at this time, a full run takes approximately 90 hours). As each run completes, a raw results.json file will be posted as well as zipped log files and direct links to log files for frameworks that encountered significant testing errors. We hope this will streamline the process of troubleshooting contributions.

We used run ed713ee9 from Server Central and run a1110174 from Azure.

In Progress

We are working to update the entire suite to Ubuntu 16 LTS and aim to be able to migrate to Ubuntu 18 LTS soon after it’s available. This update will allow us to keep up with several features in both hardware and cloud environments, such as Azure’s Accelerated Networking. Watch the GitHub project for more updates on this as they arrive!

Thank You!

Thank you so much to all of the contributors! Check out Round 15 and if you are a contributor to the project or just keenly interested, keep an eye on continuous results.

Framework Benchmarks Round 14

May 10, 2017

Nate Brady

 

Results from Round 14 of the Web Framework Benchmarks project are now available! This round’s results are limited to the physical hardware environment only, but cloud results will be included again in the next round.

Recent improvements

Our efforts during Round 14 focused on improvements that help us manage the project, mostly by removing some of our manual work.

Continuous Benchmarking

When we are not running one-off tests or modifying the toolset, the dedicated physical hardware environment at ServerCentral is continuously running the full benchmark suite. We call this “Continuous Benchmarking.” As Round 14 was wrapping up, Continuous Benchmarking allowed us to more rapidly deploy multiple preview rounds for review by the community than we have done in previous rounds.

Going forward, we expect Continuous Benchmarking to facilitate immediate progression into community-facing previews of Round 15. We hope to have the first Round 15 preview within a few days.

Paired with the continuous benchmarker is an internally-facing dashboard that shows us how things are progressing. We plan to eventually evolve this into an externally-facing interface for project contributors.

Differences

Contributors and the project’s community will have seen several renderings of the differences between Round 13 and Round 14. The final capture of differences between Round 13 and Round 14 is an example. These help us confirm changes that are planned or expected and also identify unexpected changes or volatility.

We have, in fact, observed volatility with a small number of frameworks and aim to investigate and address each as time permits. Although the benchmarking suite includes two phases of warmup prior to gathering data for each test, we might find that some frameworks or platforms require additional warmup time to be consistent across multiple measurements.

Mention-bot

We added Facebook’s mention-bot into the project’s GitHub repository. This has helped keep past contributors in the loop if and when changes are made to their prior contributions. For example, if a contributor updates the Postgres JDBC driver for the full spectrum of JVM frameworks, the original contributors of those frameworks will be notified by mention-bot. This allows for widespread changes such as a driver update while simultaneously allowing each contributor to override changes according to their framework’s best practices.

Previously, we had to either manually notify people or do a bit of testing on our own to determine if the update made sense. In practice, this often meant not bothering to update the driver, which isn’t what we want. (Have you seen the big performance boost in the newer Postgres JDBC drivers?)

Community contributions

This project includes a large amount of community-contributed code. Community contributions are up recently and we believe that is thanks to mention-bot. We expect to pass the milestone of 2,000 Pull Requests processed within a week or two. That is amazing.

Thank you so much to all of the contributors! Check out Round 14 and then on to Round 15!