Sunday, October 29, 2017

Turning the page on ereader pagination

Why bother paginating an ebook? Modern websites encourage you to "keep on swiping," but if you talk to people who read ebooks, you'll find they rather like pages. I'll classify their reasons into "backward looking" and "practical".

Backward looking reasons that readers like pagination
  • pages evoke the experience of print books
  • a tap to turn a page is easier than swiping
Practical reasons that readers like pagination
  • pages divide reading into easier-to-deal-with chunks
  • turning the page gives you a feeling of achievement
  • the thickness of the turned pages helps the reader measure progress
Reasons that pagination sucks
  • sentences are chopped in half
  • paragraphs are chopped in half
  • figures and such are sundered from their context
  • footnotes are ... OMG footnotes!
How would you design a long-form reading experience for computer screens if you weren't tied to pagination? Despite the entrenchment of Amazon and iPhones, people haven't stopped taking fresh looks at the reading experience.

Taeyoon Choi and his collaborators at the School for Poetic Computation recently unveiled their "artistic intervention" into the experience of reading. (Choi and a partner founded the Manhattan-based school in 2013 to help artists learn and apply technology.) You can try it out at http://poeticcomputation.info/

On viewing the first chapter, you immediately see two visual cues that some artistry is afoot. On the right side, you see something that looks like a stack of pages. On the left is some conventional-looking text, and to its right is some shrunken text. Click on the shrunken text and it expands into references, while the main text shrinks in its turn. This conception of long-form text as existing in two streams seems much more elegant than the usual pop-up presentation of references and footnotes in ebook readers. Illustrations appear in both streams, and when you swipe one stream up or down, the other stream moves with it.

The experience of the Poetic Computation Reader on a smartphone adapts to the smaller screen. One or the other of the two streams is always off-screen, and little arrows, rather than shrunken images, indicate the other's existence.

 * * *

On larger screens, something very odd happens when you swipe down a bit. You get to the end of the "page". And then the text starts moving the WRONG way, sideways instead of up and down. Keep swiping, and you've advanced the page! The first time this happened, I found it really annoying. But then it started to make sense. "Pages" in the Poetic Computation Reader are intentional, not random breaks imposed by the size of the reader's screen and the selected typeface. The reader gets a sense of achievement, along with an indication of progress.

In retrospect, this is a completely obvious thing to do. In fact, authors have been inserting intentional breaks into books since forever. Typesetters mark these breaks with "asterisms," named for the asterisks used to denote them. They look rather stupid in conventional ebooks. Turning asterisms into text-breaking animations is a really good idea. Go forth and implement them, ye ebook-folx!
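
If you want to play with the idea, here's a minimal sketch in TypeScript of how a text-breaking animation might be wired up, using the browser's IntersectionObserver API. The markup (an hr element with class "asterism") and the "page-turn" CSS class are my own assumptions for illustration, not anything from the Poetic Computation Reader's actual code.

```typescript
// Minimal sketch: animate an intentional break ("asterism") as a page turn.
// Assumes each break is marked up as <hr class="asterism"> and that CSS
// defines a .page-turn transition; both names are hypothetical.

const breaks = document.querySelectorAll<HTMLElement>("hr.asterism");

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const next = entry.target.nextElementSibling as HTMLElement | null;
      if (next) {
        // Slide the next section in from the side, like a turned page.
        next.classList.add("page-turn");
      }
      observer.unobserve(entry.target); // animate each break only once
    }
  },
  { threshold: 1.0 } // fire when the break is fully scrolled into view
);

breaks.forEach((el) => observer.observe(el));
```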

On a smartphone, the Poetic Computation Reader ignores the "page breaks" and omits the page edges. Perhaps a zoom animation and a thickened border would work.

Also, check out the super-slider on the right edge. Try to resist sliding it up and down a couple of times. You can't!

 * * *

Another interesting take on the reading experience is provided by Slate, the documentation software written by Robert Lord. On a desktop browser, Slate also presents text in parallel streams. The center stream can be thought of as the main text; on the left is a hierarchical outline (i.e. a table of contents), and on the right is example code. I like the way you can scroll either the outline or the text stream and the other stream follows. The outline expands and contracts accordion-style as you scroll, resulting in effortless navigation. But Slate uses a responsive design framework, so on a smartphone, the side streams reconfigure into inline figures or slide-aways.

"Clojure by Example", generated by Slate.

There are no "pages" in Slate. Instead, the animated outline is always aware of where you are and indicates your progress. The outline is a small improvement on the static outline produced by documentation generators like Sphinx, but the difference in navigability and usability is huge.

As standardization and corporate hegemony seem to be ossifying digital reading experiences elsewhere, independent experiments and projects like these give me hope that the next generation of ebooks will put some new wind in the sails of our digital reading journey.

Notes:
  1. The collaborators on the Poetic Computation Reader include Molly Kleiman, Shannon Mattern, Taeyoon Choi and HAWRAF. Also, these footnotes are awkward.


Monday, September 11, 2017

Prepare Now for Topical Storm Chrome 62

Sometime in October, probably the week of October 17th, version 62 of Google's Chrome web browser will be declared "stable". When that happens, users of Chrome will get their software updated to version 62 when they restart.

One of the small but important changes is that many websites that have not implemented HTTPS to secure their communications will be marked, subtly, as "Not Secure". When such a website presents a web form, typing into the form will make the words "Not Secure" appear next to the website URL in the address bar.

Unfortunately, many libraries, and the vendors and publishers that serve them, have not yet implemented HTTPS, so many library users who type into search boxes will start seeing the words "Not Secure" and may be alarmed.

What's going to happen? Here's what I HOPE happens:
  • Libraries, vendors, and publishers that have been working on switching their websites for the past two years (because usually it's a lot more work than just pushing a button) are motivated to fix the last few problems, turn on their secure connections, and redirect all their web traffic through their secure servers before October 17.
          So instead of this: [screenshot: OCLC's WorldCat address bar marked "Not Secure"]
           ... users will see this: [screenshot: OCLC's WorldCat address bar, secure version]
  • Library management and staff will be prepared to answer questions about the few remaining problems that occur. The internet is not a secure place, and Chrome's subtle indicator is just a reminder not to type sensitive information, such as passwords, personal names, and identifiers, into "not secure" websites.
  • The "Not Secure" animation will be noticed by many users of libraries, vendors, and publishers that haven't devoted resources to securing their websites. The users will file helpful bug reports and the website providers will acknowledge their prior misjudgments and start to work carefully to do what needs to be done to protect their users.
  • Libraries, vendors, and publishers will work together to address many interactions and dependencies in their internet systems.


Here's what I FEAR might happen:
  • The words "Not Secure" will cause people in charge to think their organizations' websites "have been hacked". 
  • Publishing executives seeing the "Not Secure" label will order their IT staff to "DO SOMETHING" without the time or resources to do a proper job.
  • Library directors will demand that Chrome be replaced by Firefox on all library computers because of a "BUG in CHROME". (creating an even worse problem when Firefox follows suit in a few months!) 
  • Library staff will put up signs instructing patrons to "ignore security warnings" on the internet. Patrons will believe them.
Back here in the real world, libraries are under-resourced and struggling to keep things working. The industry in general has been well behind the curve of HTTPS adoption, needlessly putting many library users at risk. The complicated technical environment, including proxy servers, authentication systems, federated search, and link servers, has made the job of switching to secure connections more difficult.

So here's my forecast of what WILL happen:
  • Many libraries, publishers and vendors, motivated by Chrome 62, will finish their switch-over projects before October 17. Users of library web services will have better security and privacy. (For example, I expect OCLC's WorldCat, shown above in secure and not secure versions, will be in this category.)
  • Many switch-over projects will be rushed, and staff throughout the industry, both technical and user-facing, will need to scramble and cooperate to report and fix many minor issues.
  • A few not-so-thoughtful voices will complain that this whole security and privacy fuss is overblown, and blame it on an evil Google conspiracy.

Here are some notes to help you prepare:
  1. I've been asked whether libraries need to update links in their catalog to use the secure version of resource links. Yes, but there's no need to rush. Website providers should be using HTTP redirects to force users onto the secure connections, and should use HSTS headers to make sure that their future connections are secure from the start. (A minimal sketch of this pattern follows these notes.)
  2. Libraries using proxy servers MUST update their software to reasonably current versions, and update proxy settings to account for secure versions of provider services. In many cases this will require acquisition of a wildcard certificate for the proxy server.
  3.  I've had publishers and vendors complain to me that library customers have asked them to retain the option of insecure connections ... because reasons. Recently, I've seen reports on listservs that vendors are being asked to retain insecure server settings because the library "can't" update their obsolete and insecure proxy software. These libraries should be ashamed of themselves - their negligence is holding back progress for everyone and endangering library users. 
  4. Chrome 62 is expected to reach beta next week. You'll then be able to install it from the beta channel. (Currently, it's in the dev channel.) Even then, you may need to set the #mark-non-secure-as flag to see the new behavior. Once Chrome 62 is stable, you may still be able to disable the feature using this flag.
  5. A screen capture using Chrome 62 now might help convince your manager, your IT department, or a vendor that a website really needs to be switched to HTTPS.
  6. Mixed content warnings are the result of embedding not-secure images, fonts, or scripts in a secure web page. A malicious actor can insert content or code in these elements, endangering the user. Much of the work in switching a large site from HTTP to HTTPS consists of finding and addressing mixed content issues.
  7. Google's Emily Schechter gives an excellent presentation on the transition to HTTPS, and how the Chrome UI is gradually changing to more accurately communicate to users that non-HTTPS sites may present risks: https://www.youtube.com/watch?v=GoXgl9r0Kjk&feature=youtu.be (discussion of Chrome 62 changes starts around 32:00)
  8. (added 9/15/2017) As an example of a company that's been working for a while on switching, Elsevier has informed its ScienceDirect customers that ScienceDirect will be switching to HTTPS in October. They have posted instructions for testing proxy configurations.
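
Here's a minimal sketch, in TypeScript on Node, of the redirect-plus-HSTS pattern mentioned in note 1. The port, the fallback hostname, and the max-age value are illustrative choices, not anyone's official configuration.

```typescript
import * as http from "http";

// Plain-HTTP listener whose only job is to bounce everyone to HTTPS.
// (The actual content is served by a separate HTTPS server, not shown.)
http
  .createServer((req, res) => {
    const host = req.headers.host ?? "example.org"; // hypothetical fallback
    res.writeHead(301, { Location: `https://${host}${req.url ?? "/"}` });
    res.end();
  })
  .listen(80);

// On the HTTPS side, every response should carry an HSTS header so that
// browsers remember to connect securely from the start next time.
// Browsers ignore HSTS sent over plain HTTP, so it belongs here.
export function addHsts(res: http.ServerResponse): void {
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains" // one year; adjust to taste
  );
}
```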


Monday, August 14, 2017

PubMed Lets Google Track User Searches

CT scan of a Mesothelioma patient.
CC BY-SA by Frank Gaillard
If you search on Google for "Best Mesothelioma Lawyer" and then click on one of the ads, Google can earn as much as a thousand dollars for your click. In general, Google can make a lot of money if it knows you're the type of user who's interested in rare types of cancer. So you might be surprised that Google gets to know everything you search for when you use PubMed, the search engine offered by the National Center for Biotechnology Information (NCBI), a service of the National Library of Medicine (NLM) at the National Institutes of Health (NIH). Our tax dollars work really hard and return a lot of value at NCBI, but I was surprised to discover Google's advertising business is getting first crack at that value!

You may find this hard to believe, but you shouldn't take my word for it. Go and read the NLM Privacy Policy, in particular the section on "Demographic and Interest Data":
On some portions of our website we have enabled Google Analytics and other third-party software (listed below), to provide aggregate demographic and interest data of our visitors. This information cannot be used to identify you as an individual. While these tools are used by some websites to serve advertisements, NLM only uses them to measure demographic data. NLM has no control over advertisements served on other websites.
DoubleClick: NLM uses DoubleClick to understand the characteristics and demographics of the people who visit NLM sites. Only NLM staff conducts analyses on the aggregated data from DoubleClick. No personally identifiable information is collected by DoubleClick from NLM websites. The DoubleClick Privacy Policy is available at https://www.google.com/intl/en/policies/privacy/
You can opt-out of receiving DoubleClick advertising at https://support.google.com/ads/answer/2662922?hl=en.
I will try to explain what this means and correct some of the misinformation it contains.

DoubleClick is Google's display advertising business. DoubleClick tracks users across websites using "cookies" to collect "demographic and interest information" about them. DoubleClick uses this information to improve its ad targeting. So, for example, if a user's web browsing behavior suggests an interest in rare types of cancer, DoubleClick might show the user an ad about mesothelioma. All of this activity is fully disclosed in the DoubleClick Privacy Policy, which approximately 0% of PubMed's users have actually read. Despite what the NLM Privacy Policy says, you can't opt out of receiving DoubleClick advertising; you can only opt out of DoubleClick ad targeting. So instead of mesothelioma ads, you'd probably be offered deals at Jet.com

It's interesting to note that before February 21 of this year, there was no mention of DoubleClick in the privacy policy (see the previous policy). Despite the date, there's no reason to think that the new privacy policy is related to the change in administrations, as NIH Director Francis Collins was retained in his position by President Trump. More likely it's related to new leadership at NLM. In August of 2016, Dr. Patricia Flatley Brennan became NLM director. Dr. Brennan, a registered nurse and an engineer, has emphasized the role of data in the Library's mission. In an interview with the Washington Post, Brennan noted:
In the 21st century we’re moving into data as the basis. Instead of an experiment simply answering a question, it also generates a data set. We don’t have to repeat experiments to get more out of the data. This idea of moving from experiments to data has a lot of implications for the library of the future. Which is why I am not a librarian.
The "demographic and interest data" used by NLM is based on individual click data collected by Google Analytics. As I've previously written, Google Analytics  only tracks users across websites if the site-per-site tracker IDs can be connected to a global tracker ID like the ones used by DoubleClick. What NLM is allowing Google to do is to connect the Google Analytics user data to the DoubleClick user data. So Google's advertising business gets to use all the Google Analytics data, and the Analytics data provided to NLM can include all the DoubleClick "demographic and interest" data.

What information does Google receive when you do a search on PubMed?
For every click or search, Google's servers receive:
  • your search term and result page URL
  • your DoubleClick user tracking ID
  • your referring page URL
  • your IP address
  • your browser software and operating system
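
To make that concrete, here's a sketch of the kind of query string such a hit carries. The parameter names follow Google's published Measurement Protocol; the tracking and client IDs are invented for illustration.

```typescript
// Illustrative only: roughly what a Google Analytics hit carries when you
// search PubMed. Parameter names follow the public Measurement Protocol
// (v1); the tracking ID and client ID below are invented.

const hit = new URLSearchParams({
  v: "1",                               // protocol version
  tid: "UA-00000000-1",                 // the site's tracking ID (hypothetical)
  cid: "12345.67890",                   // the user's tracker ID, from a cookie
  t: "pageview",
  dl: "https://www.ncbi.nlm.nih.gov/pubmed/?term=mesothelioma", // page URL, search term and all
  dr: "https://www.ncbi.nlm.nih.gov/",  // referring page
});

console.log(`https://www.google-analytics.com/collect?${hit}`);
// The IP address, browser, and operating system arrive implicitly,
// as part of the HTTP request itself.
```
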
While "only NLM staff conducts analyses on the aggregated data from DoubleClick", the DoubleClick tracking platform analyzes the unaggregated data from PubMed. And while it's true that "the demographic and interest data" of PubMed visitors cannot be used to identify them as  individuals, the data collected by the Google trackers can trivially be used to identify as individuals any PubMed users who have Google accounts. Last year, Google changed its privacy policy to allow it to associate users' personal information with activity on sites like PubMed.
"Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google.
So the bottom line is that Google's stated policies allow Google to associate a user's activity on PubMed with their personal information. We don't know whether Google makes use of PubMed activity or whether the data is saved at all, but NLM's privacy policy is misleading at best on this point.

Does this matter? I have written that commercial medical journals deploy intense advertising trackers on their websites, far in excess of what NLM is doing. "Everybody" does it. And  we know that agencies of the US government spend billions of dollars sifting through web browsing data looking for terrorists, so why should NLM be any different? So what if Google gets a peek at PubMed user activity - they see such a huge amount of user data that PubMed is probably not even noticeable.

Google has done some interesting things with search data. For example, the "Google Flu Trends" and "Google Dengue Trends" projects studied patterns of searches for illness-related terms. Google could use PubMed searches for similar investigations into health provider searches.

The puzzling thing about NLM's data surrender is the paltry benefit it returns. While Google gets un-aggregated, personally identifiable data, all NLM gets is some demographic and interest data about their users. Does NLM really want to better know the age, gender, and education level of PubMed users??? Turning on the privacy features of Google Analytics (i.e. NOT turning on DoubleClick) has a minimal impact on the usefulness of the usage data it provides.
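
For what it's worth, the privacy-friendlier analytics.js configuration is only a few lines. In this sketch the tracking ID is a placeholder; the point is what's present (IP anonymization) and what's absent (the displayfeatures plugin, the line that turns on DoubleClick's demographic collection).

```typescript
// analytics.js setup sketch with the privacy-friendlier choices.
// The global `ga` queue is created by Google's loader snippet (not shown);
// the tracking ID is a placeholder.
declare const ga: (...args: unknown[]) => void;

ga("create", "UA-00000000-1", "auto");
ga("set", "anonymizeIp", true); // truncate the IP before Google stores it
// ga("require", "displayfeatures");  // the line that enables DoubleClick
//                                    // demographic collection; leave it out
ga("send", "pageview");
```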

Lines need to be drawn somewhere. If Google gets to use PubMed click data for its advertising, what comes next? Will researchers be examined as terror suspects if they read about nerve toxins or anthrax? Or perhaps inquiries into abortifacients or gender-related hormone therapies will become politically suspect. Perhaps someone will want a list of people looking for literature on genetically modified crops, or gun deaths, or vaccines? Libraries should not be going there.

So let's draw the line at advertising trackers in PubMed. PubMed is not something owned by a publishing company; PubMed belongs to all of us. PubMed has been a technology leader worthy of emulation by libraries around the world. They should be setting an example. If you agree with me that NLM should stop letting Google track PubMed users, let Dr. Brennan know (politely, of course).

Notes:
  1. You may wonder if the US government has a policy about using third party services like Google Analytics and DoubleClick. Yes, there is a policy, and NLM appears to be pretty much in compliance with that policy.
  2. You might also wonder if Google has a special agreement for use of its services on US government websites. It does, but that agreement doesn't amend privacy policies. And yes, the person signing that policy for Google subsequently became the third CTO of the United States.
  3. I recently presented a webinar covering the basics of advertising in digital libraries in the National Network of Libraries of Medicine [NNLM] "Kernel of Knowledge" series.
  4. (8/16) Yes, this blog is served by Google. So if you start getting ads for privacy plug-ins...
  5. (8/16) urlscan.io is a tool you can use to see what goes on under the covers when you search on PubMed. Tip from Gary Price.

Monday, July 10, 2017

Creative Works *Ascend* into the Public Domain


It's a Wonderful Life, the movie, became a public domain work in 1975 when its copyright registration was not renewed. It had been a disappointment at the box office, but became a perennial favorite in the 80s as television stations began to play it (and play it again, and again) at Christmas time, partly because it was inexpensive content. Alas, copyright for the story it was based on, The Greatest Gift by Philip Van Doren Stern, HAD been renewed, and the movie was thus a derivative work on which royalties could be collected. In 1993, the owners of the story began to cash in on the film's popularity by enforcing their copyright on the story.

I learned about the resurrection of Wonderful Life from a talk by Krista Cox, Director of Public Policy Initiatives for ARL (Association of Research Libraries) during June's ALA Annual Conference. But I was struck by the way she described the movie's entry into the public domain. She said that it "fell into the public domain". I'd heard that phrase used before, and maybe used it myself. But why "fall"? Is the public domain somehow lower than the purgatory of being forgotten but locked into the service of a copyright owner? I don't think so. I think that when a work enters the public domain, it's fitting to say that it "ascends" into the public domain.

If you're still fighting this image in your head, consider this example: what happens when a copyright owner releases a poem from the chains of intellectual property? Does the poem drop to the floor, like a jug of milk? Or does it float into the sky, seen by everyone far and wide, and so hard to recapture?

It is a sad quirk of the current copyright regime that the life cycle of a creative work is yoked to the death of its creator. That seems wrong to me. Wouldn't it be better to use the creator's birth date? We could then celebrate an author's birthday by giving their books the wings of an angel. Wouldn't that be a wonderful life?

Monday, June 12, 2017

Book Chapter on "Digital Advertising in Libraries"

I've written a chapter for a book, edited by Peter Fernandez and Kelly Tilton, to be published by ACRL. The book is tentatively titled Applying Library Values to Emerging Technology: Tips and Techniques for Advancing within Your Mission.

Digital Advertising in Libraries: or... How Libraries are Assisting the Ecosystem that Pays for Fake News

To understand the danger that digital advertising poses to user privacy in libraries, you first have to understand how websites of all stripes make money. And to understand that, you have to understand how advertising works on the Internet today.


The goal of advertising is simple and quite similar to that of libraries. Advertisers want to provide information, narratives, and motivations to potential customers, in the hope that business and revenue will result. The challenge for advertisers has always been to figure out how to present the right information to the right reader at the right time. Since libraries are popular sources of information, they have long provided a useful context for many types of ads. Where better to place an ad for a new romance novel than at the end of a similar romance novel? Where better to advertise a new industrial vacuum pump than in the Journal of Vacuum Science and Technology? These types of ads have long existed without problems in printed library resources. In many cases the advertising, archived in libraries, provides a unique view into cultural history. In theory at least, the advertising revenue lowers the acquisition costs for resources that include the advertising.

On the Internet, advertising has evolved into a powerful revenue engine for free resources because of digital systems that efficiently match advertising to readers. Google's AdWords service is an example of such a system. Advertisers can target text-based ads to users based on their search terms, and they only have to pay if the user clicks on their ad. Google decides which ad to show by optimizing revenue—the price that the advertiser has bid times the rate at which the ad is clicked on. In 2016, Search Engine Watch reported that some search terms were selling for almost a thousand dollars per click. [Chris Lake, “The most expensive 100 Google Adwords keywords in the US,” Search Engine Watch (May 31, 2016).] Other types of advertising, such as display ads, video ads, and content ads, are placed by online advertising networks. In 2016, advertisers were projected to spend almost $75 billion on display ads; [Ingrid Lunden, “Internet Ad Spend To Reach $121B In 2014, 23% Of $537B Total Ad Spend, Ad Tech Boosts Display,” TechCrunch (April 27, 2014).] Google's DoubleClick network alone is found on over a million websites. [“DoubleClick.Net Usage Statistics,” BuiltWith (accessed May 12, 2017).]
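
The ranking arithmetic is simple enough to fit in a few lines. Here's a toy sketch, with invented numbers, of choosing the ad that maximizes expected revenue per impression:

```typescript
// Toy version of search-ad ranking: expected revenue per impression is
// the advertiser's bid times the predicted click-through rate.
// All numbers are invented for illustration.

interface Ad {
  advertiser: string;
  bidPerClick: number;        // dollars the advertiser pays per click
  predictedClickRate: number; // fraction of impressions that get clicked
}

const candidates: Ad[] = [
  { advertiser: "mesothelioma-law-firm", bidPerClick: 900, predictedClickRate: 0.002 },
  { advertiser: "discount-shoes", bidPerClick: 1.5, predictedClickRate: 0.05 },
];

const expectedRevenue = (ad: Ad) => ad.bidPerClick * ad.predictedClickRate;

const winner = candidates.reduce((best, ad) =>
  expectedRevenue(ad) > expectedRevenue(best) ? ad : best
);

console.log(winner.advertiser); // "mesothelioma-law-firm": $1.80 vs. $0.075 per impression
```

Even at a tiny click rate, the nine-hundred-dollar bid dominates, which is why knowing a user's interests is worth so much to the networks.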

Matching a user to a display ad is more difficult than matching to search-driven ads. Without a search term to indicate what the user wants, the ad networks need demographic information about the user. Different ads (at different prices) can be shown to an eighteen-year-old white male resident of Tennessee interested in sports, a sixty-year-old black woman from Chicago interested in fashion, or a pregnant thirty-year-old woman anywhere. To earn a premium price on ad placements, the ad networks need to know as much as possible about their users: age, race, sex, ethnicity, where they live, what they read, what they buy, who they voted for. Luckily for the ad networks, this sort of demographic information is readily available, thanks to user tracking.

Internet users are tracked using cookies. Typically, an invisible image element, sometimes called a "web bug," is placed on the web page. When the page is loaded, the user's web browser requests the web bug from the tracking company. The first time the tracking company sees a user, a cookie with a unique ID is set. From then on, the tracking company can record the user's web usage on every website that cooperates with the tracking company. This record of website visits can be mined to extract demographic information about the user. A weather website can tell the tracking company where the user is. A visit to a fashion blog can indicate a user's gender and age. A purchase of scent-free lotion can indicate a user's pregnancy. [Charles Duhigg, “How Companies Learn Your Secrets,” The New York Times Magazine (February 16, 2012).] The more information collected about a user, the more valuable a tracking company's data will be to an ad network.
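
Here's a bare-bones sketch, in TypeScript on Node, of what a tracking company's web-bug endpoint does: serve an invisible GIF, mint an ID cookie the first time it sees a browser, and log the referring page against that ID. The cookie name and port are made up for illustration.

```typescript
import * as http from "http";
import { randomUUID } from "crypto";

// A 1x1 transparent GIF: the classic "web bug".
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

http
  .createServer((req, res) => {
    // Reuse the visitor's ID cookie if we've seen this browser before...
    const match = /uid=([\w-]+)/.exec(req.headers.cookie ?? "");
    const uid = match ? match[1] : randomUUID(); // ...otherwise mint one.

    // The referring page reveals which cooperating site (and often which
    // search or article) the user was just looking at.
    console.log(uid, req.headers.referer, req.headers["user-agent"]);

    res.writeHead(200, {
      "Content-Type": "image/gif",
      // From now on, every cooperating site that embeds this pixel
      // reports this browser's visits under the same ID.
      "Set-Cookie": `uid=${uid}; Max-Age=31536000; Path=/`,
    });
    res.end(PIXEL);
  })
  .listen(8080);
```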

Many websites unknowingly place web bugs from tracking companies on their pages, even when they don't place advertising themselves. Companies active in the tracking business include AddThis, ShareThis, and Disqus, which provide functionality to websites in exchange for placement on those sites. Other companies, such as Facebook, Twitter, and Google, similarly track users to benefit their own advertising networks. Services provided by these companies are often placed on library websites. For example, Facebook’s “like” button is a tracker that records user visits to pages offering users the opportunity to “like” a webpage. Google’s “Analytics” service helps many libraries understand the usage of their websites, but is often configured to collect demographic information using web bugs from Google’s DoubleClick service. [“How to Enable/Disable Privacy Protection in Google Analytics (It's Easy to Get Wrong!)” Go To Hellman (February 2, 2017).]

Cookies are not the only way that users are tracked. One problem that advertisers have with cookies is that they are restricted to a single browser. If a user has an iPhone, the ID cookie on the iPhone will be different from the cookie on the user's laptop, and the user will look like two separate users. Advanced tracking networks are able to connect these two cookies by matching browsing patterns. For example, if two different cookies track their users to a few low-traffic websites, chances are that the two cookies are tracking the same user. Another problem for advertisers occurs when a user flushes their cookies. The dead tracking ID can be revived by using "fingerprinting" techniques that depend on the details of browser configurations. [Gunes Acar, Christian Eubank, Steven Englehardt, Marc Juarez, Arvind Narayanan, and Claudia Diaz, “The Web Never Forgets: Persistent Tracking Mechanisms in the Wild.” In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 674-689. DOI] Websites like Google, Facebook, and Twitter are able to connect tracking IDs across devices based on logins. 
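
To see why fingerprinting works, here's a deliberately simplified sketch: hash together a handful of browser attributes that rarely change, and you get an identifier that survives a cookie purge. Real fingerprinting scripts use many more signals (canvas rendering, installed fonts, audio processing), but the principle is the same.

```typescript
// Simplified browser fingerprint: combine stable attributes into one
// string and hash it. Real trackers use far more signals; this just
// shows the principle.

async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    navigator.hardwareConcurrency,
  ].join("|");

  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

fingerprint().then((id) => console.log(id)); // same ID even after cookies are cleared
```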

Once a demographic profile for a user has been built up, the tracking profile can be used for a variety of ad-targeting strategies. One very visible strategy is "remarketing." If you've ever visited a product page on an e-commerce site, only to be followed around the Internet by advertising for that product, you've been the target of cookie-based remarketing.

Ad targeting is generally tolerated because it personalizes the user's experience of the web. Men, for the most part, prefer not to be targeted with ads for women’s products. An ad for a local merchant in New Jersey is wasted on a user in California. Prices in pounds sterling don't make sense to users in Nevada. Most advertisers and advertising networks take care not to base their ad targeting on sensitive demographic attributes such as race, religion, or sexual orientation, or at least they try not to be too noticeable when they do it.

The advertising network ecosystem is a huge benefit to content publishers. A high traffic website has no need of a sales staff—all they need to do is be accepted by the ad networks and draw users who either have favorable demographics or who click on a lot of ads. The advertisers often don't care about what websites their advertising dollars support. Advertisers also don't really care about the identity of the users, as long as they can target ads to them. The ad networks don't want information that can be traced to a particular user, such as email address, name or home address. This type of information is often subject to legal regulations that would prevent exchange or retention of the information they gather, and the terms of use and so-called privacy policies of the tracking companies are careful to specify that they do not capture personally identifiable information. Nonetheless, in the hands of law enforcement, an espionage agency, or a criminal enterprise, the barrier against linking a tracking ID to the real-world identity of a user is almost non-existent.

The amount of information exposed to advertising networks by tracking bugs is staggering. When a user activates a web tracker, the full URL of the referring page is typically revealed. The user's IP address, operating system, and browser type are sent along with a simple tracker; the JavaScript trackers that place ads typically send more detailed information. It should be noted that any advertising enterprise requires a significant amount of user information collection; ad networks must guard against click-jacking, artificial users, botnet activity, and other types of fraud. [Samuel Scott, “The Alleged $7.5 Billion Fraud in Online Advertising,” Moz (June 22, 2015).]

Breitbart.com is a good example of a content site supported by advertising placed through advertising networks. A recent visit to the Breitbart home page turned up 19 advertising trackers, as characterized by Ghostery: [Ghostery is a browser plugin that can identify and block the trackers on a webpage.]
  • 33Across
  • [x+1]
  • AddThis
  • adsnative
  • Amazon Associates
  • DoubleClick
  • eXelate
  • Facebook Custom Audience
  • Google Adsense
  • Google Publisher Tags
  • LiveRamp
  • Lotame
  • Perfect Market
  • PulsePoint
  • Quantcast
  • Rocket Fuel
  • ScoreCard Research Beacon
  • Taboola
  • Tynt

While some of these will be familiar to library professionals, most of them are probably completely unknown, or at least their role in the advertising industry may be unknown. Amazon, Facebook and Google are the recognizable names on this list; each of them gathers demographic and transactional data about users of libraries and publishers. AddThis, for example, is a widget provider often found on library and publishing sites. They don't place ads themselves, but rather, they help to collect demographic data about users. When a library or publisher places the AddThis widget on their website, they allow AddThis to collect demographic information that benefits the entire advertising ecosystem. For example, a visitor to a medical journal might be marked as a target for particularly lucrative pharmaceutical advertising.

Another tracker found on Breitbart is Taboola. Taboola is responsible for the "sponsored content" links found even on reputable websites like Slate or 538.com. Taboola links go to content that is charitably described as clickbait and is often disparaged as "fake news." The reason for this is that these sites, having paid for the traffic, have to sell even more click-driven advertising to recoup the cost. Because of its links to the Trump Administration, Breitbart has been the subject of attempts to pressure advertisers to stop putting advertising on the site. A Twitter account for "Sleeping Giants" has been encouraging activists to ask businesses to block Breitbart from placing their ads. [Osita Nwanevu, “‘Sleeping Giants’ Is Borrowing Gamergate’s Tactics to Attack Breitbart,” Slate (December 14, 2016).] While several companies have blocked Breitbart in response to this pressure, most companies remain unaware of how their advertising gets placed, or that they can block such advertising. [Pagan Kennedy, “How to Destroy the Business Model of Breitbart and Fake News,” The New York Times (January 7, 2017).]

I'm particularly concerned about the medical journals that participate in advertising networks. Imagine that someone is researching clinical trials for a deadly disease. A smart insurance company could target such users with ads that mark them for higher premiums. A pharmaceutical company could use advertising targeted at researchers at competing companies to find clues about their research directions. Most journal users (and probably most journal publishers) don't realize how easily online ads can be used to gather intelligence as well as to sell products.

It's important to note that reputable advertising networks take user privacy very seriously, as their businesses depend on user acquiescence. Google offers users a variety of tools to "personalize their ad experience." [If you’re logged into Google, the advertising settings applied when you browse can be viewed and modified.] Many of the advertising networks pledge to adhere to the guidance of the "Network Advertising Initiative," [“The NAI Code and Enforcement Program: An Overview”] an industry group. However, the competition in the web-advertising ecosystem is intense, and there is little transparency about enforcement of the guidance. Advertising networks have been shown to spread security vulnerabilities and other types of malware when they allow JavaScript in advertising payloads. [Randy Westergren, “Widespread XSS Vulnerabilities in Ad Network Code Affecting Top Tier Publishers, Retailers,” (March 2, 2016).]

Given the current environment, it's incumbent on libraries and the publishing industry to understand and evaluate their participation in the advertising network ecosystem. In the following sections, I discuss the extent of current participation in the advertising ecosystem by libraries, publishers, and aggregators serving the library industry.

Publishers

Advertising is a significant income stream for many publishers providing content to libraries. For example, the Massachusetts Medical Society, publisher of the New England Journal of Medicine, takes in about $25 million per year in advertising revenue. Outside of medical and pharmaceutical publishing, advertising is much less common. However, advertising networks are pervasive in research journals.

In 2015, I surveyed the websites of twenty of the top research journals and found that sixteen of them placed ad network trackers on their websites. [“16 of the Top 20 Research Journals Let Ad Networks Spy on Their Readers,” Go To Hellman (March 12, 2015).]
Recently, I revisited the twenty journals to see if there had been any improvement. Most of the journals I examined had added more tracking to their websites. The New England Journal of Medicine, which employed the most intense reader tracking of the twenty, is now even more intense, with nineteen trackers on a web page that had "only" fourteen trackers two years ago. A page from Elsevier's Cell went from nine to sixteen trackers. [“Reader Privacy for Research Journals is Getting Worse,” Go To Hellman (March 22, 2017).] Intense tracking is not confined to subscription-based health science journals; I have found trackers on open access journals, economics journals, even on journals covering library science and literary studies.

It's not entirely clear why some of these publishers allow advertising trackers on their websites, because in many cases, there is no advertising. Perhaps they don’t realize the impact of tracking on reader privacy. Certainly, publishers that rely on advertising revenue need to carefully audit their advertising networks and the sorts of advertising that comes through them. The privacy commitments these partners make need to be consistent with the privacy assurances made by the publishers themselves. For publishers who value reader privacy and don't earn significant amounts from advertising, there's simply no good reason for them to continue to allow tracking by ad networks.

Vendors

The library automation industry has slowly become aware of how the systems it provides can be misused to compromise library patron privacy. For example, I have pointed out that cover images presented by catalog systems were leaking search data to Amazon, which has resulted in software changes by at least one systems vendor. [“How to Check if Your Library is Leaking Catalog Searches to Amazon,” Go To Hellman (December 22, 2016).] These systems are technically complex, and systems managers in libraries are rarely trained in web privacy assessment. Development processes need to include privacy assessments at both component and system levels.

Libraries

There is a mismatch between what libraries want to do to protect patron privacy and what they are able to do. Even when large amounts of money are at stake, a library often has little leverage to change the way a publisher delivers advertising-bearing content. Nonetheless, together with cooperating IT and legal services, libraries have many privacy-protecting options at their disposal:
  1. Use aggregators for journal content rather than the publisher sites. Many journals are available on multiple platforms, and platforms marketed to libraries often strip advertising and advertising trackers from the journal content. Reader privacy should be an important consideration in selecting platforms and platform content.
  2. Promote the use of privacy technologies. Privacy Badger is an open-source browser plugin that knows about, and blocks tracking of, users. Similar tools include uBlock Origin, and the aforementioned Ghostery.
  3. Use proxy servers. Rewriting proxy servers such as EZProxy are typically deployed to serve content to remote users, but they can also be configured to remove trackers, or to forcibly expire tracking cookies. This is rarely done, as far as I am aware. (A rough sketch of the idea follows this list.)
  4. Strip advertising and trackers at the network level. A more aggressive approach is to enforce privacy by blocking tracker websites at the network level. Because this can be intrusive (it affects subscribed content and unsubscribed content equally) it's appropriate mostly for corporate environments where competitive-intelligence espionage is a concern.
  5. Ask for disclosure and notification. During licensing negotiations, ask the vendor or publisher to provide a list of all third parties who might have access to patron clickstream data. Ask to be notified if the list changes. Put these requests into requests for proposals. Sunlight is a good disinfectant.
  6. Join together with others in the library and publishing industry to set out best practices for advertising in web resources.
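
As promised in item 3, here's a rough sketch of the tracker-stripping idea: a toy rewriting proxy that fetches a page and drops script tags pointing at blocklisted hosts before passing the HTML along. A production EZProxy deployment works quite differently, and the three-entry blocklist is a stub; a real one would use a maintained list such as EasyPrivacy.

```typescript
import * as http from "http";
import * as https from "https";

// Illustrative three-entry blocklist; a real deployment would use a
// maintained list (EasyPrivacy, for example).
const BLOCKED = ["doubleclick.net", "scorecardresearch.com", "addthis.com"];

const pattern = new RegExp(
  `<script[^>]*src="[^"]*(?:${BLOCKED.map((h) => h.replace(/\./g, "\\.")).join("|")})[^"]*"[^>]*>\\s*</script>`,
  "gi"
);

// Toy rewriting proxy: fetch /?url=<page>, remove script tags that load
// from blocked tracker hosts, and pass the cleaned HTML along.
http
  .createServer((req, res) => {
    const target = new URL(req.url ?? "/", "http://localhost").searchParams.get("url");
    if (!target) {
      res.writeHead(400);
      res.end("usage: /?url=https://example.org/article");
      return;
    }
    https.get(target, (upstream) => {
      let body = "";
      upstream.on("data", (chunk) => (body += chunk));
      upstream.on("end", () => {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(body.replace(pattern, "<!-- tracker removed -->"));
      });
    });
  })
  .listen(8080);
```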

Conclusion

The widespread infusion of the digital advertising ecosystem into library environments presents a new set of challenges to the values at the core of the library profession. Advertising trackers introduce privacy breaches into the library environment and help to sustain an information-delivery channel that operates without the grounding in values that has earned libraries and librarians a deep reserve of trust from users. The infusion has come about through a combination of commercial interest in user demographics, consumer apathy about privacy, and a general lack of understanding of a complex technology environment. The entire information industry needs to develop an understanding of that environment so that it can grow and evolve to serve users first, not the advertisers.