Newsletter Subject

3 Reasons Publishing And Ad Data Loses Value

From

adexchanger.com

Email Address

email@adexchanger.com

Sent On

Tue, Sep 18, 2018 07:32 PM

Email Preheader Text

“The Sell Sider” is a column written by the sell side of the digital media community. Spon

“The Sell Sider” is a column written by the sell side of the digital media community. Today’s column is written by Michael Manoochehri, chief technology officer at Switchboard Software.

Why do data projects continue to fail at an alarming rate? Gartner estimates that through 2017, 60% of data projects never get past preliminary stages. There are common reasons these projects struggle, but when it comes to data, the advertising industry is anything but common. Our industry is experiencing a historic surge in data volume. Complexity is growing due to the success of programmatic advertising, and publishers are demanding access to larger numbers of data sources.

From a data perspective, I believe there are three main barriers that prevent publishers from maximizing the value of their data, and it’s time we start talking about them. While there’s no quick fix for these problems, there are practical steps publishers can start taking now to prevent their data from losing value.

Granularity barrier

Obtaining a unified view of how content relates to revenue requires combining multiple programmatic data sources. Most supply-side platforms (SSPs) provide reporting data in some fashion, but there remains a daunting lack of standard granularity in the available data. Some SSPs provide data broken out to the individual impression, while others provide only daily aggregates. Granularity mismatches become an even greater challenge when each source generates different versions of currencies, date formats, time formats, bid rate formats and so on. These differences add up fast, and when they do, the inability to build a unified view of all data lowers its overall value.

To solve data granularity issues, publishers must apply business rules to normalize their data. These rules can be defined by options in a vendor’s UI, SQL code in the data warehouse or pipeline code written by an engineer. Business rules describe how data “should” look from a given source – bid rates as decimal values versus percentages, for example. Lack of visibility into where business rules are defined can cause costly problems. Time and time again, I’ve observed engineering teams debate with C-level executives about inaccuracies in revenue reporting. The reason is often that somewhere along the data supply chain, a business rule changed in an undetectable way.

To prevent granularity from becoming an issue, there must be transparency around business rules. To get started, I suggest going through the exercise of simply accounting for all steps in the data supply chain to document how rules are being applied, and by whom. Knowing how business rules are used for normalization is essential to preserving the value of the data.
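As a minimal sketch of what an explicit, auditable business rule can look like in pipeline code – the source names, field conventions and formats below are hypothetical, not from the column – one approach is to declare each source’s quirks in a single structure and normalize every row through it:

```python
from datetime import datetime

# Hypothetical per-source business rules. Declaring them in one visible
# structure, rather than burying them in vendor UIs, is one way to create
# the transparency discussed above.
BUSINESS_RULES = {
    "ssp_a": {"bid_rate": "percentage", "date_format": "%m/%d/%Y"},
    "ssp_b": {"bid_rate": "decimal", "date_format": "%Y-%m-%d"},
}

def normalize_row(source: str, row: dict) -> dict:
    """Apply one source's declared rules so all rows share one schema."""
    rules = BUSINESS_RULES[source]
    bid_rate = float(row["bid_rate"])
    if rules["bid_rate"] == "percentage":  # e.g. 2.5 (%) -> 0.025
        bid_rate /= 100.0
    day = datetime.strptime(row["date"], rules["date_format"]).date()
    return {"source": source, "date": day.isoformat(), "bid_rate": bid_rate}

# Two sources reporting the same day in different formats normalize
# to identical values:
print(normalize_row("ssp_a", {"date": "09/18/2018", "bid_rate": "2.5"}))
print(normalize_row("ssp_b", {"date": "2018-09-18", "bid_rate": "0.025"}))
# both -> date '2018-09-18', bid_rate 0.025
```

Because every rule lives in one declared place, a change to any rule becomes a visible code change rather than the kind of undetectable drift described above.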
API barrier

SSPs often promote how accessible their data is through application programming interfaces (APIs), but sometimes that’s not the reality. Publishers and advertisers rely on a network of multiple, heterogeneous data sources, but many are completely unprepared for the rate of change, problems and quirks exhibited by SSP APIs. SSP APIs can and will change or break. As the number of APIs under management grows, the possible points of failure multiply, creating a distributed systems problem. Laws like the General Data Protection Regulation also require that API credentials and data access be managed securely using best available practices. Ultimately, any time a team can’t contend with errors or downtime from a given source, its data loses value. I’ve met many talented engineering teams that struggle to understand how much of their API-based reporting works, so it should come as no surprise that line-of-business leaders often feel in the dark as well.

To prevent API challenges from becoming too daunting, I suggest that engineering teams proactively develop both processes to monitor API health and data operations playbooks for reacting to changes and outages. They should ensure that API credentials are not tied to user accounts and that they are stored with secure password management software. Playbooks are crucial for engineering to diagnose problems and for the line-of-business manager to understand what’s going on. They also serve as excellent handover documentation if there is engineering churn.
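As an illustrative sketch of such monitoring – the endpoints below are invented, and real SSP APIs will have their own auth and status semantics – a basic health-check loop with retries and logging might look like this:

```python
import logging
import time

import requests  # third-party HTTP client: pip install requests

# Hypothetical reporting endpoints to poll; substitute the real sources
# under management.
REPORTING_APIS = {
    "ssp_a": "https://api.ssp-a.example.com/v2/reports/health",
    "ssp_b": "https://api.ssp-b.example.com/v1/status",
}

def check_api_health(name: str, url: str, retries: int = 3) -> bool:
    """Probe one source, retrying with backoff before declaring it down."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code == 200:
                return True
            logging.warning("%s returned HTTP %s", name, resp.status_code)
        except requests.RequestException as exc:
            logging.warning("%s unreachable: %s", name, exc)
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    logging.error("%s is down - time to open the data ops playbook", name)
    return False

for name, url in REPORTING_APIS.items():
    check_api_health(name, url)
```

Run on a schedule, even a loop this simple turns silent API breakage into a logged, alertable event that a playbook can reference.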
Scale barrier

Advertising data volume is exploding, and heterogeneous data sources are proliferating. This one-two punch puts up a tough scalability barrier that’s difficult to fight through. Ultimately, delivering scalability comes down to smart infrastructure planning. However, many businesses assume there is no work to be done until they need to scale, which is a dangerous mistake. Keep in mind that not all infrastructure products – however helpful they may be – are suitable for scaling past a certain size. Scale problems hit unexpectedly, and when they happen, queries crawl to a halt, dashboards fail and data is lost.

To prepare for scalability challenges, publishers should start measuring now. Specifically, they must understand how much data they have, how queries are performing and how responsive their dashboards are. Can their infrastructures handle unexpected spikes in volume, whatever those spikes might look like? They should walk through a use case of what they would do should their data exceed current capacity. Publishers should also know that there are significant performance differences between standard on-premises databases and cloud data warehouses. In my experience, once the data required for daily analysis approaches 10 GB, standard databases become slow and costly to maintain. Publishers must understand the tradeoffs when they eventually need to migrate to new infrastructure.
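As a self-contained sketch of “start measuring now” – SQLite stands in here for whatever database a publisher actually runs, and the table and query are hypothetical – the idea is simply to record data volume and query latency on a schedule so growth trends surface before they become outages:

```python
import os
import sqlite3
import time

DB_PATH = "reporting.db"

# Build a small stand-in reporting table so the script runs end to end.
conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS impressions (day TEXT, source TEXT, revenue REAL)"
)
conn.executemany(
    "INSERT INTO impressions VALUES (?, ?, ?)",
    [("2018-09-18", "ssp_a", 0.42)] * 10_000,
)
conn.commit()

# Measure how much data there is ...
size_mb = os.path.getsize(DB_PATH) / 1024 / 1024

# ... and how long a typical dashboard query takes.
start = time.perf_counter()
conn.execute(
    "SELECT day, source, SUM(revenue) FROM impressions GROUP BY day, source"
).fetchall()
latency_ms = (time.perf_counter() - start) * 1000
conn.close()

print(f"data size: {size_mb:.2f} MB, query latency: {latency_ms:.1f} ms")
```

Logging these two numbers daily gives an early, concrete signal of when a standard database is approaching the point where a migration needs planning.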
One final thought: Keep in mind that breaking through these barriers will always require some engineering. Publishers should try to get proactive about aligning the goals of engineering and business teams. The more closely aligned they are, the faster they’ll get more value from their data.

Follow Switchboard Software (@switchboardsoft) and AdExchanger (@adexchanger) on Twitter.