The diagram corresponds to Figure 4 of the doctoral thesis, Applications of Syndication for the Management of Bibliographic Catalogs. It illustrates all the steps that enable content syndication to function. First, there must be a phase of content creation and editing: content syndication is impossible without content, so it relies on information published through digital media using various publishing tools, such as blogs, wikis, portals and websites, directories, and social web platforms. Any individual can become the author of a syndication channel without being aware of it, simply by posting comments on a post, a photograph, or a third-party comment. It is quite likely that a syndication channel is formed, without anyone noticing, from the information ordinary users write daily; consider, for example, the application of syndication techniques to email. However, for a syndication channel to incorporate content written by an editor, columnist, blogger, researcher, student, or professor, a triggering action is required: the act of clicking "Publish" or "Save" in whichever tool is being used at the moment.
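The "Publish" trigger described above can be sketched in code. The following is a minimal, hypothetical illustration (the Post class and regenerate_feed helper are invented for this example, not part of any specific blog platform): saving a post causes the RSS 2.0 feed to be rebuilt immediately from the stored content.

```python
# Minimal sketch of the "Publish" trigger: storing a post and then
# regenerating the syndication feed. Post and regenerate_feed are
# hypothetical names used only for illustration.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    link: str
    description: str

def regenerate_feed(posts, channel_title, channel_link):
    """Rebuild the RSS 2.0 feed document from every stored post."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post.title
        ET.SubElement(item, "link").text = post.link
        ET.SubElement(item, "description").text = post.description
    return ET.tostring(rss, encoding="unicode")

# Clicking "Publish" stores the post; the feed is then rebuilt at once.
posts = [Post("First entry", "https://example.org/1", "Hello, world")]
feed_xml = regenerate_feed(posts, "Example blog", "https://example.org")
```

In a real system the list of posts would come from the website's database, and the resulting XML would be written to the feed file served to subscribers.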
Figure 1. Diagram of the functioning of content syndication in the web environment
When the author clicks the publish button, the written information is automatically stored in the database associated with the website and, even before any response is returned, the syndication feed is automatically regenerated. This feed consists of a file, typically with an .xml extension, whose content is generated entirely automatically according to the structure of one of the established syndication formats, such as RSS 1.0, RSS 2.0, or Atom. Any of these formats can store, field by field and tag by tag, a substantial portion of the information published on the web.

From this point onward the dissemination phase begins, the most visible and familiar to users. Syndication feeds are advertised through <link> elements placed within the <head></head> header tags of the page. The presence of such links is easily detected by web browsers, enabling users to subscribe to the syndication feed containing the news or publications of a web resource. This process is known as aggregation, and the programs used for it are called aggregators. Aggregators, in many cases open-source programs, enable the organized reading of all the syndication feeds the user has marked, allowing complete storage and use of the information published on the web. For this purpose a data transfer step is required, moving the data from the .xml file of the syndication feed into the aggregator's database. Although this step may seem insignificant, it is fundamental to understanding how content syndication techniques can be leveraged for documentation purposes.

In any case, the aggregator includes a module or subprogram known as a parser. A parser is a program that reads a file encoded according to a set of conventions. Applied to syndication, it interprets the tags of a specific format in order to correctly extract and store the information they contain, or to prepare it for proper display and use by the user on screen.
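The parser step can be sketched as follows. This is a simplified illustration for RSS 2.0 only (the sample feed and the parse_rss function are invented for this example; RSS 1.0 and Atom use different element names and would require their own parsing rules): each <item> is extracted into a plain record ready to be stored in the aggregator's database.

```python
# Sketch of the parser module inside an aggregator: it reads an RSS 2.0
# feed and extracts each item into a dictionary, ready for storage in
# the aggregator's database.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item>
      <title>First entry</title>
      <link>https://example.org/1</link>
      <description>Hello, world</description>
    </item>
  </channel>
</rss>"""

def parse_rss(xml_text):
    """Return a list of {title, link, description} dicts, one per <item>."""
    root = ET.fromstring(xml_text)
    entries = []
    for item in root.iter("item"):
        entries.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        })
    return entries

entries = parse_rss(FEED)
```

The aggregator would run this parser over every subscribed feed and insert the resulting records into its local database, which is the data transfer step described above.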
Such parsers can be modified to read other encoding formats, so that the general process of redistributing information can also be applied to the redistribution of specific bibliographic records, and even of entire library catalogs, making content syndication one of the most versatile processes in modern documentation.
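As a hedged sketch of that extension, the example below parses a feed item that carries a bibliographic record using Dublin Core elements (dc:creator, dc:date), a common way to embed catalog metadata in RSS 1.0-style items. The sample record and the parse_record function are invented for illustration; a real catalog export would emit many such items with richer fields.

```python
# Sketch of a parser extended to bibliographic records: a feed item
# carrying Dublin Core metadata, as used when syndicating catalog
# entries. The record shown is an invented example.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

RECORD = """<item xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Don Quijote de la Mancha</title>
  <dc:creator>Miguel de Cervantes</dc:creator>
  <dc:date>1605</dc:date>
</item>"""

def parse_record(xml_text):
    """Extract title, creator, and date from a Dublin Core feed item."""
    item = ET.fromstring(xml_text)
    return {
        "title": item.findtext("title"),
        "creator": item.findtext(DC + "creator"),
        "date": item.findtext(DC + "date"),
    }

record = parse_record(RECORD)
```

Because the change is confined to the set of tags the parser recognizes, the same aggregation pipeline that redistributes blog posts can redistribute catalog records.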