M ADAMSON ASSOCIATES - Blog

When Is It Transformative and Why Does It Matter?
Wed, 01 Feb 2023
http://madamsonassociates.com/blog/when-is-it-transformative-and-why-does-it-matter
Published as Feature Article in NISO I/O 2/1/2023                       

"Transformation: a dramatic change in form or appearance,
extreme radical change."
     With the word “transformation” in open access and open scholarship, context is everything. What is transformative for a publisher may not be transformative to the same degree for the academic research library, even when both are reaching for similar goals. Without context, the word confuses more than it clarifies, and it shapes expectations for how resources and budgets should be used to support open access and open scholarship. So, what’s in a word? Different visions of the future, and different potential roles in advancing the goals of open scholarship. This article explores several examples of the use and misuse of this word, with the recommendation to use it sparingly and appropriately and to replace it with more accurate terminology when available.
A Matter of Perspective 
     When the word “transformative” entered the scholarly publishing vocabulary, it was to encourage a switch to open access business models by publishers, especially large commercial publishers of scholarly research journals.  Other players were seen as participants in their transformation. These differences of perspective seem baked into the first usages of the word by Open Access 2020 and later by cOAlition S.  The focus was on eliminating or reducing behind-paywall subscriptions, especially by large publishers. This assumed library materials budgets could be repurposed to offset a transformative shift by publishers to open access.
     To avoid confusion, Jisc prefers to call these “transition agreements.”  It is more accurate to say that funders, publishers, academic institutions, and research libraries are all in transition towards new types of business models and infrastructure support for research and scholarship. 
     Perhaps the real transformation for large publishers is from publishing companies to publishing and platform companies offering software and services directly to funders, authors, and administrators in addition to libraries.  While a great topic, the focus of this review is on the impact on libraries and their budgets since cOAlition S saw this as a major source of funding for open access publishing.
     Transformation in the academic setting goes well beyond agreements with publishers and repurposing library subscription budgets. There is a far bigger “transformative” view of materials acquisition, researcher outputs and open scholarship to manage.  
  1. Collection analysis has become more complex, sophisticated, and necessary.
  2. Libraries are managing multiple types of business models to provide materials and resources from many types of sources. 
  3. This rigorous assessment of current and future spending also involves funding new types of services supporting open scholarship that libraries are well positioned to provide.
     Changes are a given. Transformative changes are also likely, context sensitive, and multi-faceted.  The transformation of publishers to open access business models is only one small part of the transformation of the research institution to open scholarship. For academic libraries, even a flat budget does not necessarily mean a static one. How research institutions and libraries use their resources and budgets to meet new demands is also changing with the landscape. 
“S” is for Shock 
The Open Access 2020 Initiative first spoke of the need “to transform the current subscription publishing system, an obsolete legacy of the print era, to new open access publishing models that ensure articles are open and re-usable,” stating that the focus was on changing subscription publishing business models.  To further encourage open access publishing, cOAlition S secured the backing of major funders for those in compliance with cOAlition S requirements. The umbrella term “transformative arrangements” was adopted to include transformative agreements, transformative model agreements, and transformative journals. 
     Open Access 2020 and cOAlition S can be credited with jumpstarting change by shocking the publishing industry into accelerating the adoption of OA business models, fostering what now seem like almost daily announcements of new licensing arrangements for journals, as covered in a survey by OASPA.
     In a recent example of transformative change in publishing compliant with Plan S definitions, IEEE announced it is flipping its business model. In the spirit of transparency, IEEE has also published the status of each of its journals and their compliance targets for all to see and review (IEEE Transformative Journals Targets for 2023).
     Another example of transparency is ACM, with cost and customer analysis used and shared as a prerequisite for moving towards a new business model of published flat fee prices for unlimited access. ACM has shared a blueprint of how to make this transition far beyond adopting a new business model. See presentation by Scott Delman as part of a CHORUS webinar on “Making the Future of Open Research Work,” April 23, 2021. 
Evolving Business Models: To Be Continued
     The easiest way to create a new open access publishing model was to use article processing charges (APCs), pricing OA at the article level. In his white paper, “It’s not Transformative if Nothing Changes,” Dr. Frederick Fenter analyzes how APCs can be used to preserve the publishing profits and market share of traditional publishers, with some discussion of the greater impact of some open access publications at lower prices.
     APCs have acknowledged shortcomings, and other business models are emerging, like those described in the previously referenced CHORUS webinar. Still, the APC remains behind the scenes as a potential default business model for services and as an imperfect underlying unit of measure for assumptions and calculations of cost structures, with the emphasis on the word “assumptions.”
What Are the Goals?
     For the large publisher, this may represent a satisfactory switch in business models that preserves and may even grow revenues. Or, the goal may simply be moving towards open access publishing with sustainable and predictable business models. For the small publisher, these changes may represent a competitive disadvantage. For the library, have they lowered subscription costs or are they now paying subscription plus open access? For the scholarly community, is there expanded support for open scholarship workflows beyond traditional publishing?  In the United States, new agreements are more complex based on different goals at each institution and with each publisher.
 A Tale of Two Deals
     If you ask an academic librarian what they think of “transformative agreements,” it is doubtful there will be resounding enthusiasm for the word, even as the number of agreements with new open access business models grows. For the larger research libraries, the idea of flipping to models that could cost them even more is untenable. Solutions have resulted in even more agreement complexity. Sometimes these can look more like complex versions of a “Big Deal 2.0,” but they represent useful experimentation. 
     In 2021, the California Digital Library reached an agreement with Elsevier that could be called “transformative” by cOAlition S standards, flipping the business model to primarily OA, albeit with refinements, as covered in the Memorandum of Understanding. Excellent discussions were published in The Scholarly Kitchen: Lisa Hinchliffe’s post, “The Biggest Big Deal” (March 16, 2021), and Rick Anderson’s “Six Questions” (March 25, 2021).
     The UC agreement is closer to a cOAlition S transformative agreement because the APC fees are used as a unit of calculation that does not stand alone but pays for access to other content without a subscription (with extra for backfile access). With the details covered well elsewhere, we tease out one interesting thread from this landmark multi-payer agreement: the library’s involvement in APC-like funding by agreeing to pay the first $1,000:
  • This solves the problem of fees for APC’s exceeding materials budgets for a major research institution with significant prestige publishing while providing access to all content.
  • The agreement limits support for those with grants to cover costs and covers more for those who cannot afford them.
  • It keeps the library as a partner / player in the eyes of the administration and faculty in the fuller scope of publisher agreements.
  • Library involvement means easy collection of statistics on open access publishing and fees institution wide.
     Another equally complex and interesting new type of agreement that is more traditional, not truly a transformative agreement, is the one between the Texas Library Coalition for United Action and Elsevier.  Susan D’Agostino, technology and innovation reporter for Inside Higher Ed, provides an excellent review of the deal in her article, “Is a Deal Between 44 Texas Colleges and Elsevier ‘Historic’?” (December 9, 2022). The agreement lowers subscription costs, with estimated savings of $4.75 million annually, and caps annual increases at 2%. Like the University of California agreement, there is a 15% discount on APC charges, with the exception of 10% for The Lancet and Cell Press. While there is a discount on APC charges, unlike the UC agreement this is not applied to the subscription component.
     While this is still essentially a subscription-plus agreement, it offers a new type of roadmap for negotiations and structuring relationships between academic institutions and publishers to study over time, delivering lower costs for libraries and other perks, like the pilot on copyright reverting to authors. The libraries may pay less in subscriptions while Elsevier also collects discounted APC fees for open access, mostly paid directly by funders through grants, researchers, departments, or administration.  Taking the APC charges into account, spending that may once have been concentrated in the libraries’ materials budgets is now distributed. 
     Both agreements provide useful experimentation and point to the increased need for ongoing analysis, especially if Elsevier provides Texas with annualized data on all institutional open access publishing and APCs so the libraries have a more complete picture of open access spending at their institutions.
 New Library Roles in Open Scholarship
     While these agreements are exciting new developments, institutional and library changes cut closer to the heart of a deeper transformation that goes far beyond the published article of record, encompassing the wider scope of changes in how research is conducted, documented, and disseminated. Funding these changes will continue to require analysis, experimentation, adjustment, and growth by all players, publishers and academic institutions alike.
     Libraries and their academic institutions also have roles in supporting published and unpublished scholarly outputs. Sami Benchekroun, Co-Founder of Morressier, has said only 9% of research presented at conferences is published. (“The Potential of Expanding the Research Lifecycle Through Digital Transformation and Cultural Disruption,” January 19, 2023)
     A survey of open access spending by the Association of Research Libraries (ARL) is useful because its scope is much broader, reflecting the full range of library-funded activities beyond agreements with publishers. ARL surveyed US member libraries in May-June 2022. (Hudson-Vitale & Ruttenberg, “Investments in Open: Association of Research Libraries US University Member Expenditures on Services, Collections, Staff, and Infrastructure in Support of Open Scholarship.”)
     It suggests the value of new ways of evaluating spending and data collection to reveal a more complete picture of library-supported spending on open scholarship and how this is changing.  While ambitious, this study does not include open education resources, research data, or membership in advocacy organizations. The breakdown from the survey includes:
     Read-and-Publish (Transitional)               64%
     Institutional Repositories                    18%
     Non-APC based                                 10%
     APCs                                           4%
     Open Access Journal Publishing and Hosting     4%
     Note: ARL also favors the word “transitional” over “transformational.” The first category includes “publish-and-read,” transformational, etc.
     We appreciate the ambitious nature of this survey and difficulties collecting data.  One would hope ARL continues to provide this survey, refining and expanding scope.  For instance:
  • Further breakdown of the first category would be useful. 
  • Since this survey focuses on library budgets, it does not reflect a key data point: APC charges paid by other departments. Libraries could request this data from publishers when negotiating new agreements to form a more accurate picture of open access publishing for the entire institution. 
     Zooming out another layer, a 2022 NISO Roundtable on the Library Role in the Research Process highlighted new and changing roles for libraries in a digital landscape. Libraries are involved in facilitating, creating, and managing research outputs far beyond articles and books published by third parties. Within academic institutions, lines are blurring, with new roles and collaborations between libraries, IT departments, and administrative functions supporting open scholarship. Examples include IT departments offering high-speed computing and other resources, administrators managing grants and faculty exposure, and librarians helping to develop data management plans for grants, facilitating and sometimes creating research outputs like multimedia objects and data visualizations, and hosting digital scholarship and library publishing. Sayeed Choudhury of Carnegie Mellon highlighted roles in supporting not just open data but open-source software. 
     Within this context, libraries are revisiting priorities, changing staffing requirements, and changing perceptions of administration. The landscape is not static for the major players with new types of services, collaboration, and skill sets. There is the need to experiment, analyze and document what is working and what is not.  In addition, while slow to change, we think it likely that there will be changes in how researchers and their outputs are evaluated. When and if that happens, systems that support their full range of outputs will be “a very big deal.”
     There is change and transformation to new roles, relationships, and services supporting open scholarship. The library and publishing services of the future will look different. But definitions and context matter, especially when they affect value to the scholarly community and setting priorities for funding and budgeting.  
     The desire to shock the publishing industry has accelerated adoption of open access business models.  Because of cOAlition S, the expression “transformative arrangements” is likely to have some persistence, even if a misnomer. One can see tacit acknowledgement of this in how often the word “transformative” is put in quotes, as if to say “the so-called transformative….” Others have switched to clarifying language, calling these changes “transitional,” “publish-and-read,” etc. Otherwise, one could ask, “What do you think of transformative agreements?” and get the response, “What do you mean by that? How transformative is it? Are these new business models working for us?” 
     It becomes an issue when one aspect of the changing dynamics of open scholarship is mistaken for the whole. That view assumes library materials budgets are static sources of funds for a single purpose, instead of part of a larger budget for services facing changing demands: acquiring resources, open and subscription, for their users and funding new open scholarship services. 
Still, there are many new and exciting changes and a sense there is “no going back” to previous ways of creating research outputs or doing business. 
  • Eventual changes in how researcher output is evaluated and rewarded are possible.
  • Agreements are likely to continue to be complex, involving experimentation and monitoring.
  • More detailed analysis of resources, business models, subscriptions, and services is increasingly important, including services like the Delta Think Open Access Data & Analytics Tool and Unsub to inform decision making. 
  • Libraries could / should include requests for any APC funding at their institutions as part of new agreements to provide a more complete picture of open access spending.
  • Roles and relationships between funders, publishers, administrators, IT departments and libraries and the skills required to offer services are likely to continue to evolve. 
     Adopting the Jisc language, referring to changes in publishing to open access business models as more of a transition than a transformation, seems like an easy solution to “the language problem.” Publishers and libraries are finding new ways to work together to provide resources and collaborative support for scholarship. The transformation of research institutions towards open scholarship, with libraries supporting new types of scholarly output, new potential reward systems, and new services, suggests we are seeing only the beginning of the changes that will define the future landscape and the roles all players will have in shaping it.

©2023 M Adamson Associates   
The High Cost of Context Switching
Tue, 14 May 2019
http://madamsonassociates.com/blog/high-cost-of-context-switching

[Chart] Strayer & Watson, University of Utah (2010): only 2% of the population can multitask successfully; everyone else pays the costs of context switching. Realization (2013): an estimated $450 billion/year cost to global business.
     The above is a standalone chart from my recent NISO Webinar, “Managing Change with Project Management Skills.” Notice the dates. The cost of context switching is not news, but there is increasing awareness of how this manifests.
     Since Gerald Weinberg’s early writing on the cost of task switching, there have been many additional studies exploring multitasking, concluding it is largely a myth. What appears to be multitasking is simply switching between two or more tasks, not giving full attention to any. We now know more about how apparent multitasking negatively affects IQ, brain development, and creative thinking.
     There are distinctions between three major types:
  1. Low level tasks (we can walk and chew gum at the same time).
  2. Media multitasking (it’s not working).
  3. Higher level conceptual work (slows time to completion; impairs creative problem solving and quality).
     A study by Strayer & Watson of the University of Utah suggested only 2% of the population can multitask successfully (Supertaskers, 2010). Everyone else is going through the toll lane and paying the costs of context switching. A study by Realization, a project management software company, estimated this cost to global business at $450 Billion / year (Effects of Multitasking on Organizations, 2013). This means there are hidden competitive advantages for people and organizations who adopt strategies to minimize this.

Personal Level

     The media is full of time management solutions to deal with media multitasking distractions at a personal level. Best practices include limiting email checks to three times a day, turning phones off or putting them in “Do Not Disturb” (emergency access only), and turning off notifications, whether for personal reasons or because we are concentrating on writing a blog post. Since we all have multiple competing priorities, results improve when we work on one thing at a time with minimal distractions and leave notes of where we left off and what to do next. When we pick the work up again, it takes less time to continue. 

Organizational Level

      Project management skills also offer practical and pragmatic ways to implement changes across an organization to minimize the costs of context switching. Awareness of the cost makes it even more important for management to prioritize and clearly define high-value opportunities. “Less is more” here, with direct impact on staff performance, successful development, and time to market. 
     The first step, and the one where our consulting services often come into play, is using internal and external information to evaluate options and opportunities for decision-makers. Once projects are selected for further review and potential implementation, clearly defining objectives and scope and engaging in a disciplined breakdown of features and/or tasks to accomplish objectives is necessary to establish priorities, estimate time and required resources, and coordinate how team members work together.
     This same project planning discipline also makes it easier to reduce context switching costs. As just one example, when complex tasks are broken down, it’s easier for team members to focus on the highest value priorities and complete them within manageable time frames before picking up the next task or module.
     The Agile approach takes this a few steps further by recognizing the benefits of assigning team members to one project at a time. There are other reasons for this approach, but reducing the need for employees to choose between multiple competing priorities is baked into this style. The same principles are also active within projects, prioritizing high-value features to work on sequentially with incremental deliverables. 
     Managers have the same challenges they have always had to prioritize work. Context switching costs add reasons to either engage in projects sequentially or assign resources differently. Whether personal or organizational, one of the best ways to get more done in less time is to recognize the myth of multitasking, the true costs of context switching, and adopt work habits to counteract both.

Related Studies

  • 2005. University of London study. Email and phone call distractions cause a drop in IQ.
  • 2006. Russell Poldrack, UCLA. Multitasking adversely affects learning, especially for tasks requiring more attention.
  • 2009. Stanford University. Heavy multitaskers are mentally less organized and have a hard time differentiating relevant from irrelevant details.
  • 2013. Realization study. Organizational multitasking costs global businesses $450 billion each year. https://www.prnewswire.com/news-releases/study-organizational-multitasking-costs-global-businesses-450-billion-each-year-221154011.html
  • 2014. Kep Kee Loh, University of Sussex. High multitaskers may have less brain density.
  • 2017. Strayer, “The Myth of Multitasking.” Easy-reading recap.
  • 2018. Uncapher & Wagner, Stanford University, “Minds and brains of media multitaskers: Current findings and future directions,” PNAS. Scholarly research review.
Unlocking the Treasure Trove with Inspec Analytics
Thu, 25 Apr 2019
http://madamsonassociates.com/blog/unlocking-the-treasure-trove-with-inspec-analytics

Like cracking the code of semantic technologies and linked data, Inspec Analytics seems like the perfect fit for an A&I service with depth of coverage, rich scientific metadata and loaded with value for the users, value for the customers, and value for the organization.
Organizational Comparison from Inspec Analytics User Guide
      Many have heard the siren’s call of linked data and semantic technologies since they were first introduced in the 1990s, only to be dashed on the rocks of the practical realities of implementation or to need a serious recalibration of approach.  The vision of a semantic web with hyperdata links as ubiquitous as document hyperlinks is appealing, but Tim Berners-Lee’s vision may not be realized as he first envisioned it.  However, adaptations of his dream are finding their way into early applications in financial services, healthcare and pharmaceuticals (AstraZeneca), retail (eBay chatbot), enterprise applications (used for providing business insights, predictive modelling, repurposing and reusing content), and knowledge graphs like Google and Wikipedia. [For Google-watchers, see also Google’s recent patent profiled in OntoSpeak.]
“Semantic Technologies will continue to see steady growth and adoption but will likely never be the rallying flag on their own. I think we will continue to appreciate Semantic Technologies as an infrastructure play, in service to broader needs such as Artificial Intelligence, Machine Learning, or data interoperability. Semantic Technologies will come to assume their natural role as essential enablers, but not as keystones on their own for major economic and information change.” 
                            Michael Bergman quoted in
                            Semantic Web and Semantic Technology Trends 2018

      In publishing, there are intriguing initiatives like Inspec Analytics and Springer Nature’s SciGraph (not covered here).  In libraries, OCLC completed and published results of the third International Linked Data Survey in December 2018. Results suggest development is mostly experimental. This revealing survey, led by Karen Smith-Yoshimura and the OCLC Research Library Partnership team, includes survey results from 2014, 2015 and 2018, with insights into such projects – how respondents view measures of success, obstacles encountered, and lessons learned. 
The appeal for publishers and libraries of flexible data models is strong:
  • Improved discovery and interoperability across disparate sources and types of content
  • New types of uses for content, including just the metadata
  • Easier to perform analysis, run reports
  • Creates a direct, high-value conversation with the customer and new classes of users
  • Competitive advantages
  • Visualizations enabling the ability to ‘walk the graph’
     Yet there can be a combination of reasons linked data projects are not pursued or are put on hold.  Several publishers have indicated no new immediate revenue streams to offset significant investment.  For libraries, respondents to the OCLC survey repeatedly cited “requires more staff and stakeholder buy-in.”  There are the challenges of large indexing projects working with ambiguous vocabularies and unclear objectives.  While the capabilities might appeal, applications are likely to languish without an easy-to-use front end that mines the potential.  Additional barriers to adoption include a low-level query language and huge prior investments in relational databases. 
     With this as context, the presentation of Inspec Analytics by Vincent Cassidy, Director of Academic Markets, at the February 2019 NFAIS meeting stands out as a compelling initiative because it seems to deliver a pragmatic and substantive set of offerings.  Like cracking the code, it seems like the perfect fit for an A&I service with depth of coverage, rich scientific metadata, and loaded with value for the users, value for the customers, and value for the organization. Intrigued by Inspec Analytics, I followed up with an interview and demo with Tim Aitken, Senior Product Manager, reflected in this piece.

 So What is Linked Data?
     In semantic web terminology, and for the uninitiated, linked data describes a method of exposing and connecting data [often factual content] on the web from different sources.  The web uses hyperlinks that allow people to move from one document to another.  Linked data uses hyperdata links to do something similar, e.g., Barack Obama ... attended ... Columbia University, or the University of Toronto ... publishes ... ‘x’ articles ... on bioengineering.  You can extract some of this information from documents, but it is far easier using linked data, where the relationships are already created.  Linked data makes it easier for computers to make sense of information by showing clearly defined relationships and then linking this information across different sources and types of content.  Once these relationships have been established, using the information depends on how it is accessed and served up, including how the linked data is searched and how it is analyzed and presented via a graphical user interface (GUI).  
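The subject–predicate–object idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real triple store; the facts are the examples from the text, and the `match` helper is invented for this sketch.

```python
# Minimal sketch of linked-data triples as (subject, predicate, object)
# tuples, with a wildcard pattern match. Illustrative only; real triple
# stores (e.g., RDF databases queried via SPARQL) are far more capable.

triples = [
    ("Barack Obama", "attended", "Columbia University"),
    ("University of Toronto", "publishes", "articles on bioengineering"),
]

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Who attended what?
print(match(predicate="attended"))
# -> [('Barack Obama', 'attended', 'Columbia University')]
```

Because relationships are stored explicitly, a query becomes a pattern over triples rather than a keyword match over documents, which is the shift described above.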
“These storehouses of semantic relationships are often referred to as ‘knowledge graphs’ (the term ‘graph’ is about relationships, not visualization) or ‘triple stores’ (a ‘triple’ is a subject-predicate-object relationship). The power of a knowledge graph or triple store is that it enables you to infer relationships. You can ask it questions and it can give you answers that aren’t explicitly stored in the data. In effect, it isn’t ‘finding answers,’ it’s ‘figuring out answers.’ This is powerful!”
            Bill Kasdorf, Kasdorf & Associates

     Traditional search matches words.  This could be described as the “is” or “is not” of a traditional search.  Semantic technology together with linked data adds another layer of meaning. It adds more ‘verbs’ (predicates) like “attended,” “works at,” “is married to.”  Suddenly there are many more relationships than “is” or “is not.” These relationships are machine-readable and open the possibility to apply inference engines.  For instance:
          Fact one: Millie graduated from Stanford University.
          Fact two: Stanford University is an accredited US institution.
          Inferred fact: Millie graduated from an accredited US institution.
     While not terrifically exciting at the level of this example, the more data available for analysis, the richer and more accurate the inference results. 
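The Millie example can be mimicked with a toy rule. This is a deliberately simple sketch of inference over stored facts, not a real inference engine; the predicate names are made up for illustration.

```python
# Toy inference over stored facts: from "X graduatedFrom Y" and
# "Y isA Z", derive that X graduated from some Z. Illustrative only.

facts = {
    ("Millie", "graduatedFrom", "Stanford University"),
    ("Stanford University", "isA", "accredited US institution"),
}

def infer_graduation(facts):
    """Chain graduatedFrom with isA to produce inferred facts."""
    inferred = set()
    for (s1, p1, o1) in facts:
        for (s2, p2, o2) in facts:
            if p1 == "graduatedFrom" and p2 == "isA" and o1 == s2:
                inferred.add((s1, "graduatedFromSome", o2))
    return inferred

print(infer_graduation(facts))
# -> {('Millie', 'graduatedFromSome', 'accredited US institution')}
```

The inferred fact is not stored anywhere; it is figured out from the two stored facts, which is the "figuring out answers" behavior Kasdorf describes in the quote above.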

The Layered Look
     How do you recognize that semantic technologies are being used as part of a product?  How “smart” is a potential resource? 
     It may not affect the quality of the product offerings, but it may define attributes related to speed, efficiency, interoperability, and flexibility for future development.  For instance, Dimensions says they have “linked research data.” That could mean more than one thing and is useful to explore to understand the full capabilities.

     In what ways is machine learning integrated (or not)?  What types of data analytics are employed?
  • Visualizations or analytics do not require semantic technologies.  Analytics and reporting can sit on top of a relational database, without the linked data. Several commercial databases seem to do just that, adding an analytic layer on top of a relational database. 
  • Linked data does not require a relational database, though if one is available, it offers a useful foundation for a hybrid approach.
  • A resource with linked data may have a usable but not particularly user-friendly interface.  The power is unleashed as usable infrastructure.
     One assumes that if a publisher is using semantic technologies, they may want bragging rights, letting users know they have taken this approach because its underlying potential offers advantages that can be more easily exploited. With increasing emphasis on AI and machine learning in the scholarly publishing conversation, it makes sense to look under the hood to see what this means and how it impacts resource development and use. 

     Potential (discrete) components of a modern reference database; not all may be present:

     This illustrates the concept of layering discrete components ‘on top of each other’ to provide a service that may or may not include semantic technologies, depending on how many layers are implemented as part of the product design and build. 

This diagram is purposefully not the technical architecture, of which there are abundant illustrations elsewhere. 

 Working from the bottom up:
  • Traditional relational database content and taxonomies. At the base is the foundational relational database plus the classification / controlled vocabularies. This, together with search and a user interface, represents a traditional A&I service.
  • Linked data. The next layer up represents a new level of indexing with linked data (for simplicity, we’ve grouped RDF here to indicate a package).
  • Ontologies describe the meaning of relationships between terms, i.e., more verbs like “is a subspecies of,” “attends,” or “is married to.” We’ve separated this from taxonomies because of how it may be employed. 
  • Search exists in traditional and semantic variations.  Semantic search requires different capabilities, e.g., NoSQL (“Not Only SQL”) stores and the SPARQL query language.  For the unfamiliar, Wikidata offers a public SPARQL query service that shows what this looks like in practice.
  • Finally, the top two layers include two separate components that can appear to be one:
    • a user-friendly interface,
    • the ability to run analytics, visualizations, and reports, retrieving metadata from the linked data tables and relational database.
     This layering is presented as a filter for useful inquiry: a way to understand a product from the customer / user perspective when evaluating a resource.  The specific technical architectures are beyond the scope of this piece.
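What distinguishes the semantic search layer from traditional keyword matching can be sketched with a toy SPARQL-style triple-pattern matcher in plain Python. This is an illustrative assumption-laden sketch, not how any product is built; all data, predicates, and names are hypothetical, and real SPARQL engines are vastly more capable.

```python
# Variables (strings starting with "?") bind to any value, as in SPARQL.
triples = [
    ("paper1", "writtenBy", "Millie"),
    ("paper1", "publishedIn", "Journal of Examples"),
    ("paper2", "writtenBy", "Millie"),
    ("paper2", "publishedIn", "Annals of Samples"),
]

def match(pattern, triples):
    """Return variable bindings for one triple pattern, SPARQL-style."""
    results = []
    for triple in triples:
        binding, ok = {}, True
        for want, have in zip(pattern, triple):
            if want.startswith("?"):
                binding[want] = have
            elif want != have:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Analogous to: SELECT ?j WHERE { ?p writtenBy "Millie" . ?p publishedIn ?j }
papers = [b["?p"] for b in match(("?p", "writtenBy", "Millie"), triples)]
journals = [b["?j"] for b in match(("?p2", "?pred", "?j"), triples)
            if b["?pred"] == "publishedIn" and b["?p2"] in papers]
print(journals)
# -> ['Journal of Examples', 'Annals of Samples']
```

The point of the sketch: a relational search asks "does this row contain this word?", while a triple-pattern query traverses relationships, chaining predicates across records.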

Inspec Analytics: Exposing New Value
     For A&I services, excellence in discovery, with high precision and recall, is a core strength, and discovery is enhanced by the additional linked data. But discovery is only one capability.  With Inspec Analytics, the metadata itself has new life and value, offering ways to use the metadata alone for institutional profiles, for research into who is publishing what and where, or for identifying other researchers for collaboration.  This can be done at highly flexible levels of granularity and with multiple views. Users are asking questions of the metadata itself, like:
  • What is the research output from my specific institution for a particular field?
  • How am I connected to the leading authors in a particular field?
  • In which journals have my peers published?
  • Who are we collaborating with, or could collaborate with?
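A question like the last one maps naturally onto a traversal of co-authorship triples. Here is a minimal sketch in plain Python; every name, predicate, and institution below is a hypothetical illustration, and the real Inspec Analytics queries run over a far richer linked-data graph with standard identifiers.

```python
from collections import defaultdict

# (paper, "hasAuthor", person) and (person, "affiliatedWith", institution)
triples = [
    ("paperA", "hasAuthor", "Millie"),
    ("paperA", "hasAuthor", "Ravi"),
    ("paperB", "hasAuthor", "Millie"),
    ("paperB", "hasAuthor", "Chen"),
    ("Millie", "affiliatedWith", "Example University"),
    ("Ravi", "affiliatedWith", "Sample Institute"),
    ("Chen", "affiliatedWith", "Sample Institute"),
]

def collaborating_institutions(institution, triples):
    """Which institutions do our authors co-publish with?"""
    affil = {s: o for (s, p, o) in triples if p == "affiliatedWith"}
    authors = defaultdict(set)
    for (s, p, o) in triples:
        if p == "hasAuthor":
            authors[s].add(o)
    partners = set()
    for paper, people in authors.items():
        # If any author on the paper is ours, every co-author's
        # institution is a collaboration partner.
        if any(affil.get(a) == institution for a in people):
            partners.update(affil[a] for a in people
                            if a in affil and affil[a] != institution)
    return partners

print(collaborating_institutions("Example University", triples))
# -> {'Sample Institute'}
```

The same traversal, run over millions of indexed records, is what turns static metadata into an analytics product.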
This is exposing new relationships with the content and services:
  • There are new types of uses by different members of the user community.
  • Searches address questions that are high priorities for faculty and administration.
  • Inspec is experiencing increased engagement with customers at a new level of value.
  • Calls from customers include additional ideas and requests that either flow into new features or have the potential to lead to new business opportunities.

Measuring Success
     Early measures of success include statistics showing that users are staying in the database longer, visiting more frequently, and printing reports to share with colleagues. New types of users are enthusiastic and engaged. Librarians are also pleased that the additional services are drawing users to a quality resource. 

A Significant Transformation
     This is a strong strategic play for Inspec on multiple fronts. Despite the missional nature of the decision, they knew it would be a value-added play, not tied to additional revenue. Projects like this represent a significant commitment of time and resources.  The roughly 30-40 people involved (some shared with another project) reflect a considerable investment and a new staff composition: for example, 4 additional data scientists for statistical analysis, 3 developers designing and implementing linked tables, testing teams, and external specialist consultants, in addition to technology vendors. At a Rave Technologies talk in December, David Smith, Head of Product Solutions, joked that the first two years involved “trust” by senior management until the vision began to unfold.
     The project started in 2015, along with a decision to upgrade their entire platform and leverage that development. For the past year, 100 institutions have had access to a beta version of Inspec Analytics to provide feedback, soon to come out of beta with access by all Inspec customers as part of their subscriptions. 
  • Besides routine additions to the Inspec A&I database using existing taxonomies and ontologies, Inspec has added linked data back to 2013 (so researchers would have five years of data when they went live in 2018). They are using industry standards wherever possible, and sources like Ringgold for institutional identification. 
  • Inspec has worked with Molecular Connections to create the Analytics user interface and reports. 
  • Linked data is now an ongoing additional part of their metadata creation workflow.
    • Linked data is created separately from traditional indexing.
    • There are plans to add linked data retrospectively, back to 2009.
  • The Graphical User Interface and Analytics and Reporting layers are available only via the Inspec website.  Other platforms may link out to these tools or pursue different integration strategies. 
  • New capabilities and reports continue to be added based on input and requests from user communities. 

     The new flexible data models and related outputs also offer serious competitive advantages beyond greater direct engagement with users looking to mine the data, suggesting fertile ground for other benefits to follow.   As Tim Aitken aptly put it when I interviewed him, clearly feeling the excitement from their user community, “Inspec Analytics has unlocked the treasure trove that is Inspec.”  We look forward to watching their space!
Inspec Analytics
Additional visuals and explanations of features are available in the Inspec Analytics User Guide.

With special thanks and appreciation to Tim Aitken, IET Inspec Analytics, and Bill Kasdorf, Kasdorf & Associates, for their time and much appreciated contributions!
 © 2019 M Adamson Associates. All Rights Reserved.

Starting a Conversation ...
Sun, 21 Apr 2019
http://madamsonassociates.com/blog/starting-a-conversation
“It’s not that I’m so smart, it’s just that I stay with problems longer.”   
– Albert Einstein

     Coming from Einstein one can’t help but smile. And yet like much that Einstein says, there is deeper wisdom. In this age of rapid change and disruption, examining issues and opportunities with some persistence is appealing.
     For some time now, I’ve been considering writing a blog that explores topics or themes from multiple angles. When I select a topic or theme, it means making a commitment to return to it for multiple pieces. Pulling out threads to consider and tying them into a larger context; starting a conversation. There is much to be curious about and to explore – the interplay of enabling / disruptive technologies, business models, new services and business opportunities and the people and organizations re-envisioning the future.
      Posts will draw from the broader areas of open scholarship, metadata, enabling technologies, new and old media like longform content (an umbrella term that also includes monographs), and key market segments and players. We also look forward to engaging with others through interviews, profiles, and conversations about future directions and possibilities. 
     If these articles capture your interest, please do follow and share .... 

“The mind is not a vessel to be filled, but a fire to be kindled.”  
– Plutarch