How does the NYT paywall work?

As much as 13 per cent of digital subscriber growth now comes from outside the US, which means the paper's non-US digital subscriber base is expanding steadily. Nowhere near as many Americans are buying the print version of the paper, but its worldwide reach is clearly at an all-time high. The dilemma for the NYT now is how to continue to grow that subscriber base.

If print revenues continue to drop and display advertising falters, the paper faces a huge challenge simply in maintaining its income, let alone growing it. One option may be the development of niche subscriptions, so that readers could pay smaller amounts to access sports, culture or even local New York news.

The company has experimented with subscription-based apps, with varying degrees of success, but this is clearly an option going forward. The paper might also have to market its paywall more aggressively: for example, it could halve the number of articles readers get for free each month. It has done this before, when it cut the monthly allowance from 20 free articles to the current limit. The WSJ and FT are taking a spines-out approach, on the theory that the pain of not reading their content will force people to pay.

The NYT is taking a more open-door approach, on the theory that the pleasure of reading its content will be enough to persuade a large number of people to pay.
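
Whichever posture a paper takes, the mechanics of a metered paywall are simple to sketch. Below is a minimal client-side illustration that counts free article views against a monthly allowance; the storage key, the limit of ten and the function names are assumptions for illustration, not the Times's actual implementation.

```typescript
// Conceptual sketch of a metered paywall counter (not the NYT's actual code).
// Assumes a monthly limit of 10 free articles tracked in localStorage.

const FREE_ARTICLE_LIMIT = 10;
const STORAGE_KEY = "meter";

interface MeterState {
  month: string;      // e.g. "2024-05", so the count resets each month
  articlesRead: number;
}

function loadMeter(): MeterState {
  const currentMonth = new Date().toISOString().slice(0, 7);
  const raw = localStorage.getItem(STORAGE_KEY);
  const state: MeterState | null = raw ? JSON.parse(raw) : null;
  // Reset the counter when a new month starts or no state exists yet.
  if (!state || state.month !== currentMonth) {
    return { month: currentMonth, articlesRead: 0 };
  }
  return state;
}

// Called on each article view; returns true if the paywall should be shown.
function recordArticleView(isSubscriber: boolean): boolean {
  if (isSubscriber) return false;            // subscribers are never metered
  const state = loadMeter();
  state.articlesRead += 1;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
  return state.articlesRead > FREE_ARTICLE_LIMIT;
}
```

In practice a count like this is also enforced server-side, which is the job of the Meter Service discussed later in this piece, since a purely client-side counter is easy to reset.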

Nick Rizzo has collated some thoughts on the NYT paywall from people in the key demographic of 25- to 30-year-olds, all of whom are paying for the digital-only version of the NYT. As one of them put it: "It just seems wasteful. The New York Times is my number one source for news and I appreciate the service it provides."

"I also buy a lot of music, because I like the product, understand the incentives involved, and want its production to continue. Any work-around to avoid the paywall would still cost me precious minutes. I subscribe to the Weekender (indeed, to the slightly cheaper Sunday-only edition), which is the cheapest possible way to give myself online access."

Getting different perspectives to vet our solution and making sure we had a plan for long-term stability and support were essential for a smooth launch. Here are a few of the areas we made sure to cover. The Times engineering group believes strongly in the Request for Comments (RFC) process because it brings visibility to a major architectural change and engages colleagues in decision making.

An RFC is not very complicated; it is simply a technical document that proposes a detailed solution to a specific problem. Before we started coding, we spent several weeks preparing a couple of RFCs that outlined our plan to rewrite the Meter Service, and we shared them with all of our colleagues in the Technology department and asked for their feedback.

This included documenting the current system and its dependencies, analyzing multiple proofs of concept and creating clear architectural diagrams of our new solution. The time it took to solicit feedback was worth it. We wanted several testing layers that we could automate (because who enjoys manual testing?), with coverage from unit tests, contract tests, load testing and end-to-end functional testing.
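
As an illustration of the kind of automated coverage described above, here is a small Jest-style sketch; the framework choice, the function under test and the response shape are assumptions rather than the Times's actual test code.

```typescript
// Illustrative unit and contract tests for meter decision logic (Jest-style).
// The function name and response shape are hypothetical.

interface MeterResponse {
  granted: boolean;   // whether the reader may view the article
  viewsUsed: number;  // articles read so far this period
  viewsAllowed: number;
}

// Toy implementation under test: grant access while the reader is under the limit.
function evaluateMeter(viewsUsed: number, viewsAllowed: number): MeterResponse {
  return { granted: viewsUsed < viewsAllowed, viewsUsed, viewsAllowed };
}

test("grants access below the free-article limit", () => {
  expect(evaluateMeter(3, 10).granted).toBe(true);
});

test("denies access once the limit is reached", () => {
  expect(evaluateMeter(10, 10).granted).toBe(false);
});

// A minimal "contract" check: the response must always expose these fields,
// so clients of the legacy and new services can rely on the same shape.
test("response exposes the agreed fields", () => {
  const res = evaluateMeter(1, 10);
  expect(Object.keys(res).sort()).toEqual(["granted", "viewsAllowed", "viewsUsed"]);
});
```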

With our testing strategy vetted by our RFC process, we could prove feature parity with the old meter and be confident in our comprehensive test suite. Since we had little to work with, we first set out to better understand how the existing service performed for our users. This helped us determine not only what to watch for, but also what benchmarks to set for acceptable behavior. Before rolling out our new service, we put in place dashboards in Chartio for client-side data and Stackdriver for service-level data that allowed us to observe any changes to existing metrics. When something goes wrong, we need to see why the problem is happening and whether action needs to be taken; PagerDuty works well for that.
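
As a rough sketch of how client-side latency observations could be collected to feed dashboards like those, the snippet below times a request and reports the result without blocking the page; the /metrics endpoint and payload shape are hypothetical, and the post itself only names Chartio and Stackdriver as the destinations.

```typescript
// Sketch of client-side timing collection to feed a latency dashboard.
// The /metrics endpoint and payload shape are hypothetical, not the Times's API.

async function timedMeterCall(url: string): Promise<Response> {
  const start = performance.now();
  const response = await fetch(url, { credentials: "include" });
  const durationMs = performance.now() - start;

  // sendBeacon is fire-and-forget, so reporting never blocks page rendering.
  navigator.sendBeacon(
    "/metrics",
    JSON.stringify({
      metric: "meter_request_duration_ms",
      value: durationMs,
      status: response.status,
      timestamp: Date.now(),
    })
  );
  return response;
}
```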

The prep work we did allowed us to move extremely quickly to rebuild the service, spin up the new infrastructure and create a solid deployment pipeline. The next step was to launch, but we wanted to make sure we did it right to avoid a high-stress and high-stakes situation.

Our challenge was to guarantee that the new Meter Service performed at least as well as the old one, and that the response was identical between the two services for every user. To verify this, the web client kept its call to the legacy service and made an additional, silent call to the new service. That call was non-blocking and had no impact on the actual user experience.
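
A sketch of that dark-read pattern as it might look in the web client follows, with the legacy call still driving the user experience and a fire-and-forget shadow call exercising the new service; the URLs, response handling and reporting endpoint are assumptions for illustration.

```typescript
// Sketch of the "dark read" pattern: the web client still relies on the legacy
// meter, while a silent, non-blocking call exercises the new service.

const LEGACY_METER_URL = "/svc/legacy-meter";
const NEW_METER_URL = "/svc/new-meter";

async function checkMeter(articleId: string): Promise<unknown> {
  // The blocking call that actually drives the user experience.
  const legacyPromise = fetch(`${LEGACY_METER_URL}?article=${articleId}`, {
    credentials: "include",
  }).then((r) => r.json());

  // Fire-and-forget shadow call to the new service; errors are swallowed so a
  // misbehaving new service can never affect readers.
  fetch(`${NEW_METER_URL}?article=${articleId}`, { credentials: "include" })
    .then((r) => r.json())
    .then(async (newResult) => {
      const legacyResult = await legacyPromise;
      reportComparison(articleId, legacyResult, newResult);
    })
    .catch(() => { /* ignore: the shadow call must never impact users */ });

  return legacyPromise;
}

// Record whether the two services agreed, for offline analysis.
function reportComparison(articleId: string, legacy: unknown, next: unknown): void {
  const match = JSON.stringify(legacy) === JSON.stringify(next);
  navigator.sendBeacon(
    "/metrics",
    JSON.stringify({ metric: "meter_shadow_match", articleId, match })
  );
}
```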

The responses received from each service were tracked and compared, and we used this opportunity to load-test the new service to see how well it performed during real news-related traffic spikes. The benefit of this approach was that the legacy service was still functioning throughout, which meant we had the freedom to modify the new service without worrying about impacting users.

We could even take it down if we needed to make configuration changes to the infrastructure, such as auto-scaling policies or instance sizes. We let this run for a couple of weeks until we had ironed out the last bugs and felt confident that we could proceed with a phased rollout. We accomplished what we needed from the dark rollout and felt ready to start relying on the response from the new service.

Within the web client integration itself, we routed one percent of traffic to the new service and the remaining 99 percent to the legacy service.
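
One common way to implement a deterministic split like that is to hash a stable reader identifier into buckets, so the same user always lands on the same side; the sketch below is an assumption about the mechanism, not the Times's actual routing code.

```typescript
// Sketch of a 1% / 99% split. A stable hash of the reader's ID keeps each user
// on the same side of the split across requests. The hash function, threshold
// and URLs are illustrative assumptions.

const NEW_SERVICE_PERCENT = 1;

function hashToBucket(userId: string): number {
  // Simple FNV-1a hash, reduced to a bucket in [0, 100).
  let hash = 0x811c9dc5;
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % 100;
}

function meterUrlFor(userId: string): string {
  return hashToBucket(userId) < NEW_SERVICE_PERCENT
    ? "/svc/new-meter"
    : "/svc/legacy-meter";
}
```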

Additionally, we added a back-end validation layer, which allowed us to compare the new and legacy service results to make sure they completely matched. Once we were confident the website was behaving correctly, we moved on to test all other applications.
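
A back-end validation layer along those lines might look like the following sketch, in which both services are called, the legacy answer remains authoritative and any mismatch is logged for follow-up; the service URLs, response shape and logging are illustrative assumptions.

```typescript
// Sketch of a back-end validation layer: call both services, serve the
// legacy answer, and log any mismatch for investigation.

interface MeterResult {
  granted: boolean;
  viewsUsed: number;
}

async function fetchMeter(base: string, userId: string): Promise<MeterResult> {
  const res = await fetch(`${base}/meter?user=${encodeURIComponent(userId)}`);
  return (await res.json()) as MeterResult;
}

async function validatedMeter(userId: string): Promise<MeterResult> {
  const [legacy, next] = await Promise.all([
    fetchMeter("http://legacy-meter.internal", userId),
    fetchMeter("http://new-meter.internal", userId),
  ]);

  if (legacy.granted !== next.granted || legacy.viewsUsed !== next.viewsUsed) {
    // A mismatch never affects the reader; it is only recorded for follow-up.
    console.warn("meter mismatch", { userId, legacy, next });
  }

  // The legacy result remains authoritative until the rollout completes.
  return legacy;
}
```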


