Web resource
A web resource is any identifiable resource (digital, physical, or abstract) present on or connected to the World Wide Web.[1][2][3] Resources are identified using Uniform Resource Identifiers (URIs).[1][4] In the Semantic Web, web resources and their semantic properties are described using the Resource Description Framework (RDF).[5]
The concept of resource has evolved over the Web's history, from the early notion of static addressable documents or files to a more generic and abstract definition, now encompassing every "thing" or entity that can be identified, named, addressed or handled, in any way whatsoever, on the web at large or in any networked information system. The declarative aspects of a resource (identification and naming) and its functional aspects (addressing and technical handling) were not clearly distinguished in the early specifications of the web, and the very definition of the concept has been the subject of a long and still open debate involving difficult, and often arcane, technical, social, linguistic and philosophical issues.
From documents and files to web resources
In the early specifications of the web (1990–1994), the term resource is barely used at all. The web was designed as a network of more or less static addressable objects, essentially files and documents, linked by Uniform Resource Locators (URLs). A web resource was implicitly defined as something that can be identified. Identification serves two distinct purposes: naming and addressing; only the latter depends on a protocol. Notably, RFC 1630 does not attempt to define the notion of resource at all; it barely uses the term beyond its occurrences in Uniform Resource Identifier (URI), Uniform Resource Locator (URL) and Uniform Resource Name (URN), and still speaks of "Objects of the Network".
RFC 1738 (December 1994) further specified URLs, the term "Universal" being changed to "Uniform". The document makes more systematic use of resource to refer to objects that are "available", or "can be located and accessed", through the Internet. Here again, the term resource itself is not explicitly defined.
From web resources to abstract resources
The first explicit definition of resource appears in RFC 2396, of August 1998:
A resource can be anything that has identity. Familiar examples include an electronic document, an image, a service (e.g., "today's weather report for Los Angeles"), and a collection of other resources. Not all resources are network "retrievable"; e.g., human beings, corporations, and bound books in a library can also be considered resources. The resource is the conceptual mapping to an entity or set of entities, not necessarily the entity which corresponds to that mapping at any particular instance in time. Thus, a resource can remain constant even when its content---the entities to which it currently corresponds---changes over time, provided that the conceptual mapping is not changed in the process.
Although the examples in this document were still limited to physical entities, the definition opened the door to more abstract resources. Provided a concept is given an identity, and this identity is expressed by a well-formed URI (Uniform Resource Identifier, a superset of URLs), a concept can be a resource as well.
In January 2005, RFC 3986 made this extension of the definition completely explicit: '…abstract concepts can be resources, such as the operators and operands of a mathematical equation, the types of a relationship (e.g., "parent" or "employee"), or numeric values (e.g., zero, one, and infinity).'
Resources in RDF and the Semantic Web
First released in 1999, RDF was initially intended to describe resources, in other words to declare metadata about resources in a standard way. An RDF description of a resource is a set of triples (subject, predicate, object), where the subject represents the resource being described, the predicate a type of property relevant to this resource, and the object can be data or another resource. The predicate itself is considered a resource and is identified by a URI. Hence, properties such as "title" and "author" are represented in RDF as resources, which can in turn be used, recursively, as the subject of other triples. Building on this recursive principle, RDF vocabularies, such as RDF Schema (RDFS), the Web Ontology Language (OWL) and the Simple Knowledge Organization System (SKOS), build up definitions of abstract resources such as classes, properties and concepts, all identified by URIs.
RDF also allows for anonymous resources, or blank nodes, which are not identified by a URI.
Using HTTP URIs to identify abstract resources
URIs, particularly HTTP URIs, are frequently used to identify abstract resources, such as classes, properties or other kinds of concepts; examples can be found in RDFS or OWL ontologies. Since such URIs are associated with the HTTP protocol, the question arose of what kind of representation, if any, one should get for such resources through this protocol, typically using a web browser, and whether the syntax of the URI itself could help to differentiate "abstract" resources from "information" resources. The URI specifications, such as RFC 3986, leave to the protocol specification the task of defining the actions performed on resources, and provide no answer to this question. It had been suggested that an HTTP URI identifying a resource in the original sense, such as a file, document or any kind of so-called information resource, should be a "slash" URI, in other words should not contain a fragment identifier, whereas a URI used to identify a concept or abstract resource should be a "hash" URI using a fragment identifier.
For example, http://www.example.org/catalogue/widgets.html would both identify and locate a web page (perhaps providing some human-readable description of the widgets sold by Silly Widgets, Inc.), whereas http://www.example.org/ontology#Widget would identify the abstract concept or class "Widget" in that company's ontology and would not necessarily retrieve any physical resource through the HTTP protocol. It has been countered, however, that such a distinction is impossible to enforce in practice, and well-known standard vocabularies provide widely used counter-examples. For example, Dublin Core concepts such as "title", "publisher" and "creator" are identified by "slash" URIs like http://purl.org/dc/elements/1.1/title.
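Since the suggested distinction rests entirely on the presence of a fragment identifier, it can be checked mechanically. A minimal sketch using Python's standard library, with the two URIs from the example above:

```python
from urllib.parse import urldefrag

def is_hash_uri(uri: str) -> bool:
    """Return True if the URI carries a fragment identifier (a "hash" URI)."""
    base, fragment = urldefrag(uri)
    return fragment != ""

# The catalogue page: no fragment, hence a "slash" URI.
print(is_hash_uri("http://www.example.org/catalogue/widgets.html"))  # False

# The ontology term: the fragment "Widget" makes it a "hash" URI.
print(is_hash_uri("http://www.example.org/ontology#Widget"))  # True
```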
The general question of what kinds of resources an HTTP URI should or should not identify came to be known in the W3C as the httpRange-14 issue, after its name on the issues list of the Technical Architecture Group (TAG). In 2005 the TAG delivered a final answer to this issue, making the distinction between an "information resource" and a "non-information" resource depend on the type of answer given by the server to a "GET" request:
- 2xx Success indicates that the resource is an information resource.
- 303 See Other indicates that the resource could be informational or abstract; the redirection target may tell.
- 4xx Client Error provides no information at all.
This allows vocabularies (such as Dublin Core, FOAF and WordNet) to continue to use slash rather than hash URIs for pragmatic reasons. While this compromise seems to have achieved consensus in the Semantic Web community, some of its prominent members, such as Pat Hayes, have expressed concerns about both its technical feasibility and its conceptual foundation. In Hayes's view, the very distinction between "information resource" and "other resource" is impossible to draw and had better not be specified at all, and ambiguity of the referent is inherent to URIs, as it is to any naming mechanism.
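The TAG's resolution can be read as a simple decision procedure over the status code returned for a GET request; a hypothetical sketch (the helper name is illustrative):

```python
def classify_resource(status_code: int) -> str:
    """Interpret an HTTP GET status code per the TAG's httpRange-14 resolution.

    Hypothetical helper; the mapping follows the three cases listed above.
    """
    if 200 <= status_code < 300:   # 2xx Success
        return "information resource"
    if status_code == 303:         # 303 See Other
        return "indeterminate: may be abstract; consult the redirect target"
    if 400 <= status_code < 500:   # 4xx Client Error
        return "no information"
    return "no information"        # other codes carry no httpRange-14 signal

print(classify_resource(200))  # information resource
```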
Resource ownership, intellectual property and trust
In RDF, "anybody can declare anything about anything". Resources are defined by formal descriptions that anyone can publish, copy, modify and republish over the web. While the content of a web resource in the classical sense (a web page or online file) is clearly owned by its publisher, who can claim intellectual property over it, an abstract resource can be defined by an accumulation of RDF descriptions, not necessarily controlled by a single publisher and not necessarily consistent with each other. It is an open question whether a resource should have an authoritative definition with clear and trustworthy ownership and, if so, how to make that description technically distinct from other descriptions. A parallel issue is how intellectual property may apply to such descriptions.
References

Citations

- RFC 3986, Uniform Resource Identifier (URI): Generic Syntax
- Roy T. Fielding's dissertation
- "What do HTTP URIs Identify?", by Tim Berners-Lee
- RFC 1738, Uniform Resource Locators (URL)
- RDF Current Status
Sources
- Web Characterization Terminology & Definitions Sheet, editors: Brian Lavoie and Henrik Frystyk Nielsen, May 1999.
- A Short History of "Resource" in web architecture, by Tim Berners-Lee
- Presentations at IRW 2006 conference, Web resources
- Towards an OWL ontology for identity on the web, by Valentina Presutti and Aldo Gangemi, SWAP2006 conference Archived 2006-12-24 at the Wayback Machine.
Definition and Fundamentals
Core Definition
In web architecture, a resource is defined as anything that can be identified by a Uniform Resource Identifier (URI). This broad definition encompasses a diverse array of entities, enabling the Web to function as a universal information space where identification is the foundational mechanism for linking and interaction.[2] Concrete examples of resources include digital objects such as HTML documents, images, numbers and strings, which are information resources whose essential characteristics can be conveyed in a message, as well as services such as APIs accessed via URI endpoints. Examples also include non-information resources, such as people, which cannot be fully represented as information but can be identified and interacted with via URIs; for example, a person can be identified indirectly through a "mailto:" URI like "mailto:[email protected]", which denotes an Internet mailbox. Another illustration is the URI "http://weather.example.com/oaxaca", which identifies a weather report as a specific resource.[2][3][4]

Resources form the core building blocks of web architecture, supporting a uniform interface that facilitates consistent interactions between agents, as articulated in the REST principles integrated into the Web's design. This interface relies on standardized methods for identifying and manipulating resources, promoting scalability and interoperability across the networked system.[5][6] Web resources are distinguished from non-web resources by their addressability within the Web's information space, typically via protocols like HTTP, which enable retrieval and manipulation of representations; resources lacking such web-accessible identification fall outside this scope.[7][8]

Key Characteristics
A web resource is fundamentally identifiable through a unique Uniform Resource Identifier (URI), which serves as a global reference distinguishing it from all other resources regardless of their nature or accessibility. This identifiability ensures consistent naming and location across distributed systems, allowing components to reference the same resource unambiguously in interactions.[9][10]

Resources are not directly accessed or manipulated; instead, interactions occur via representations, data formats that capture the current or intended state of the resource, such as HTML for a document or JSON for structured data. These representations, often transferred over HTTP, include metadata and content that reflect the resource's state at a specific point in time, enabling content negotiation to select appropriate formats based on client capabilities. For instance, the HTML content of a webpage serves as a representation of the underlying resource identified by its URI, while the resource itself remains an abstract entity.[11][12][13]

Interconnectivity is a core property enabled by hyperlinks within representations, which reference other resources via their URIs, fostering the hypertext structure of the web. This linking mechanism allows resources to form a navigable network, where users or applications can traverse from one to another seamlessly, embodying the distributed hypermedia nature of the architecture.[14]

Web resource interactions adhere to a stateless model: each request contains all the information the server needs to process it independently, without relying on prior exchanges or server-maintained context. This constraint, aligned with HTTP's design, promotes scalability and reliability by treating every access as self-contained, though application-level state can be managed client-side or via external mechanisms.[15][16][17]

Historical Evolution
Origins in File Systems
In early computing, resources were primarily conceptualized as files stored on physical media such as magnetic tapes or disks, providing persistent storage for data and programs.[18] These files served as the fundamental units of information management, allowing users to organize and retrieve data through hierarchical structures introduced in systems like Multics in the 1960s and later refined in Unix during the 1970s.[19] Identification relied on filesystem paths, which specified a location within a directory tree, such as /usr/bin/ls in Unix-like systems, enabling navigation from a root directory to the target file.[18]
During the 1970s and 1980s, advancements in networking extended this resource model beyond isolated machines. The ARPANET, operational since 1969, facilitated the sharing of files as network resources through protocols like the File Transfer Protocol (FTP), first specified in 1971 by Abhay Bhushan and implemented to allow transfers between heterogeneous hosts.[20] By the mid-1970s, FTP enabled anonymous access for browsing and downloading files across ARPANET nodes, treating remote files as accessible resources similar to local ones, though primarily for research collaboration among universities and institutions.[21] This development marked an initial step toward distributed resource sharing, with ARPANET growing to nearly 100 nodes by 1975 and FTP becoming a core application for exchanging documents and software.[20]
However, these early networked resources faced significant constraints that limited their scalability and flexibility. Without a global naming scheme, identification depended on host-specific tables maintained manually at each site, making it difficult to reference resources uniformly across the network until the Domain Name System (DNS) emerged in the 1980s.[21] Access was inherently tied to physical server locations and network connectivity, with transfers vulnerable to disruptions in site-specific infrastructure, such as reliance on dedicated lines between fixed nodes.[20] Furthermore, resources lacked abstraction from their underlying formats; files were often bound to specific binary or text structures dictated by the source system, requiring users to handle compatibility manually without standardized representations or content-independent access.[19]
A pivotal shift occurred in 1989 with Tim Berners-Lee's proposal for an information management system at CERN, which envisioned a hypertext-based framework to overcome the rigidity of static file resources.[22] Titled "Information Management: A Proposal," this document addressed the challenges of tracking evolving project data across distributed teams, proposing links between documents to create dynamic, interconnected resources rather than isolated files dependent on local or host-bound access.[22] This concept laid the groundwork for moving beyond filesystem and early network limitations toward a more abstract, globally addressable model.[22]
Transition to Web-Based Resources
The emergence of the World Wide Web in the early 1990s marked a pivotal shift from localized file systems to globally accessible web resources, transforming static files into networked entities through the introduction of Uniform Resource Locators (URLs) within HyperText Markup Language (HTML). Invented by Tim Berners-Lee in 1990, URLs provided a standardized way to identify and link resources across distributed servers, enabling files to be served remotely rather than accessed only on local machines. This integration allowed ordinary documents, such as HTML files, to function as web resources when hosted on servers, decoupling their availability from physical proximity and fostering a new paradigm of information sharing.[23][24]

Central to this transition was the Hypertext Transfer Protocol (HTTP), developed by Berners-Lee between 1989 and 1991, which standardized the retrieval of resources over networks and further separated resource content from its storage location. HTTP operated as a client-server protocol in which browsers could request and receive representations of resources, initially simple HTML pages, via uniform addresses, making data exchange efficient and scalable across the nascent internet. Unlike file systems limited by local hardware constraints, HTTP enabled resources to be fetched dynamically from any connected server, laying the groundwork for the web's hyperlinked structure.[25][26]

Key milestones accelerated this adoption: in 1991, Berners-Lee launched the first website at CERN, an informational page describing the World Wide Web project itself, which demonstrated resource access over the internet for the first time. The release of the Mosaic browser in April 1993 revolutionized public engagement by introducing graphical interfaces and inline images, making web resources intuitive and appealing to non-technical users and sparking widespread exploration of online content.
By mid-decade, these developments had propelled the web from an experimental tool at research institutions to a global platform.[27][28][29]

The initial focus on static HTML files soon expanded to dynamic content generation, exemplified by the Common Gateway Interface (CGI), introduced in 1993, which allowed servers to execute scripts and produce resources on demand in response to user requests. CGI scripts, often written in languages such as Perl, enabled interactivity by processing form inputs and querying databases to assemble customized HTML outputs, evolving web resources from fixed documents into responsive applications. This shift addressed the limitations of static serving, paving the way for early e-commerce and personalized experiences.[30][31]

Post-2000, the Representational State Transfer (REST) architectural style, formalized by Roy Fielding in his 2000 dissertation, profoundly influenced web services by elevating resources as the core abstraction for designing scalable APIs. REST principles emphasized uniform interfaces for manipulating resource representations via HTTP methods, making resources the central, addressable elements in distributed systems and standardizing their role in modern web architectures beyond mere document retrieval. This framework addressed scalability challenges in growing web applications, solidifying the resource-centric model that underpins contemporary services.[6][32]

Identification and Access
URI-Based Identification
A Uniform Resource Identifier (URI) serves as the foundational mechanism for uniquely identifying web resources in a decentralized environment, enabling global reference without reliance on a central authority. Defined by a standardized syntax, a URI is a compact string that denotes a resource's identity, allowing systems to reference, link to and interact with it consistently across the web. This identification system underpins the web's scalability by ensuring that resources can be named and discovered independently of their physical location or representation format.[1]

The generic URI syntax, as specified in RFC 3986, consists of a hierarchical structure that parses into several components. It begins with a scheme, which indicates the protocol or namespace (e.g., "http"), followed optionally by a double slash and an authority component comprising user information (if present), host and port. The path follows, a sequence of segments locating the resource within the authority's namespace, potentially followed by a query component (introduced by "?") for additional parameters and a fragment identifier (prefixed by "#") that specifies a secondary resource or subcomponent. This syntax allows for relative references and resolution processes to handle incomplete forms, ensuring flexibility while maintaining unambiguous parsing. The scheme and authority establish the root context, the path and query refine the specific identifier, and the fragment serves as an intra-resource pointer.[33]

URIs encompass several subtypes tailored to identification needs: generic URIs act as abstract identifiers applicable to any resource; Uniform Resource Locators (URLs) extend this by including location information for retrieval; and Uniform Resource Names (URNs) provide persistent, location-independent names within defined namespaces.
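The component breakdown described by RFC 3986 can be observed with any standard URI parser; for instance, with Python's urllib (the URI itself is illustrative):

```python
from urllib.parse import urlsplit

# An illustrative URI exercising every generic component of RFC 3986.
parts = urlsplit("http://user@www.example.org:8080/catalogue/widgets?color=red#spec")

print(parts.scheme)    # http
print(parts.netloc)    # user@www.example.org:8080  (the authority component)
print(parts.hostname)  # www.example.org
print(parts.port)      # 8080
print(parts.path)      # /catalogue/widgets
print(parts.query)     # color=red
print(parts.fragment)  # spec
```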
URNs follow a specific syntax starting with "urn:", followed by a namespace identifier (NID) and a namespace-specific string (NSS), as in "urn:isbn:0451450523" for a book, emphasizing naming over access. In contrast, URLs incorporate scheme-specific locators such as host and path to enable direct resolution. This typology allows URIs to function as pure names when location is irrelevant, supporting the web's evolution toward semantic and distributed systems.[33][34]

Core principles governing URIs include persistence, uniqueness and delegation, which collectively ensure reliable identification. Persistence requires that once assigned, a URI continue indefinitely to refer to the same resource, as changing it breaks links and erodes trust; Tim Berners-Lee emphasized that "cool URIs don't change", advocating designs that avoid tying identifiers to transient structures such as file paths or organizational hierarchies, so that they remain stable over decades. Uniqueness demands that distinct resources receive distinct URIs, preventing collisions and enabling global interoperability, with owners responsible for avoiding aliases that could confuse references. Delegation operates through hierarchical control: domain name owners (via DNS) manage sub-paths under their authority and may sub-delegate to further refine resource namespaces without central oversight. These principles foster a self-organizing name space resilient to growth and change.[1][35]

W3C guidelines position URIs fundamentally as names for resources, decoupled from any specific retrieval method to promote architectural neutrality. In the Web Architecture, URIs identify resources abstractly, without implying access protocols or representation formats, allowing the same URI to denote diverse entities such as documents, services or abstract concepts.
This opacity principle advises against inferring resource properties from the URI string itself, treating it solely as an identifier to support evolving technologies. Such guidance ensures URIs remain versatile tools for identification, independent of how or whether the resource is accessed.[1]

To address the limitations of ASCII-only URIs, Internationalized Resource Identifiers (IRIs) extend the framework by incorporating Unicode characters, enabling non-Latin scripts in identifiers while maintaining compatibility through conversion to URIs. Defined in RFC 3987, IRIs use the same syntactic structure but allow UCS characters in components such as the authority, path, query and fragment, with percent-encoding for interoperability. Introduced in 2005, this extension supports global linguistic diversity, allowing identifiers to represent resources in languages such as Chinese or Arabic without transliteration, thus broadening web inclusivity. An IRI maps to a URI via UTF-8 encoding and escaping, preserving the identification principles of the original URI syntax.[36]

HTTP Interaction with Resources
HTTP serves as the primary protocol for interacting with web resources, enabling clients to request, retrieve, modify and delete representations of resources identified by URIs. The protocol operates on a request-response model: a client sends an HTTP request to a server, which responds with a status code, headers and optionally a message body containing the resource representation. The interaction is stateless, meaning each request contains all the information the server needs to process it independently of prior requests.[37]

HTTP defines several methods that specify the desired action on a resource. The GET method retrieves a representation of the resource without modifying it, making it safe and idempotent for repeated use. In contrast, the POST method submits data to create a new resource or trigger a server-side process, potentially altering state, and is not idempotent. The PUT method updates or creates a resource at a specific URI, replacing the entire representation if one exists, and is idempotent. The DELETE method removes the resource at the specified URI and is also idempotent, though servers may return a success status even if the resource did not exist. These methods align with the uniform interface constraint in REST, treating resources as nouns in URIs while methods act as verbs to manipulate them.[38][6]

Servers respond to these requests with status codes that indicate the outcome. A 200 OK status signifies successful processing, typically returning the requested representation for GET requests. The 404 Not Found code indicates that the server cannot locate a resource at the given URI. Where a resource exists but its representation requires redirection, such as after a POST creating a new resource, the 303 See Other status directs the client to a different URI for the resulting representation, avoiding confusion in content negotiation.
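The method semantics described above (safe GET, idempotent PUT and DELETE) can be illustrated with a toy in-memory store; this is a sketch of the semantics, not an HTTP server, and all names are hypothetical:

```python
class ResourceStore:
    """Toy illustration of HTTP method semantics over URI-keyed resources."""

    def __init__(self):
        self._resources = {}  # URI path -> representation

    def get(self, uri):
        """Safe: returns (status, representation) without changing state."""
        if uri in self._resources:
            return 200, self._resources[uri]
        return 404, None

    def put(self, uri, representation):
        """Idempotent: replaces the entire representation at the URI."""
        created = uri not in self._resources
        self._resources[uri] = representation
        return (201 if created else 200), representation

    def delete(self, uri):
        """Idempotent: repeating a DELETE leaves the same final state."""
        self._resources.pop(uri, None)
        return 204, None

store = ResourceStore()
store.put("/widgets/1", {"name": "gear"})
print(store.get("/widgets/1"))   # (200, {'name': 'gear'})
store.delete("/widgets/1")
store.delete("/widgets/1")       # repeating DELETE: same end state
print(store.get("/widgets/1"))   # (404, None)
```

Repeating the PUT or DELETE calls leaves the store unchanged, which is exactly what idempotence guarantees; POST is omitted because it carries no such guarantee.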
These codes provide standardized feedback on the interaction's success or failure.[39]

Content negotiation allows servers to select the most appropriate representation of a resource based on client preferences, such as media type, language or character encoding, specified in request headers like Accept. For instance, a client requesting a user profile might specify Accept: application/json, prompting the server to return JSON data, while another requesting text/html receives an HTML page of the same resource. This mechanism ensures flexibility in delivering tailored representations without altering the underlying resource.[40]

RESTful architectures build on these HTTP features to create scalable web services, emphasizing stateless communication in which each request from a client contains all the context needed by the server. Resources are addressed via URIs as primary nouns, with HTTP methods defining operations, promoting a uniform interface that simplifies scaling and caching. This statelessness enhances reliability, as servers do not retain session state between requests.[6]

Modern protocol versions extend HTTP's efficiency for resource interactions. HTTP/2 introduces multiplexing, allowing multiple request-response exchanges over a single TCP connection via independent streams, reducing the head-of-line blocking of HTTP/1.1. HTTP/3 further improves this by using QUIC over UDP, incorporating built-in multiplexing, encryption and connection migration to handle network changes, thus optimizing resource retrieval in variable conditions. These advancements maintain semantic compatibility while enhancing performance for resource access.[41][42]

Semantic Extensions
Abstract Resources Beyond Documents
In the architecture of the World Wide Web, resources extend far beyond tangible documents to encompass abstract entities that cannot be fully captured in a single digital representation. These abstract resources include conceptual or non-physical objects such as "the sky", mathematical constants like π, or dynamic phenomena like "tomorrow's weather in Oaxaca", all of which can be identified by Uniform Resource Identifiers (URIs).[1] For instance, a URI such as http://example.org/pi might resolve to a document describing the value of π, its historical significance, or computational approximations, thereby providing partial views of the underlying abstract concept rather than the concept itself in its entirety.[1]
The World Wide Web Consortium (W3C) formalized this expansive notion of resources in its 2004 Architecture of the World Wide Web, Volume One, defining a resource as "whatever might be identified by a URI," which deliberately includes both information resources (like documents) and non-information resources (such as physical objects or abstractions).[1] This formalization emphasizes that abstract resources may have multiple representations—data sent in response to a retrieval request that convey aspects of the resource's state—allowing the Web to interconnect diverse entities through shared identifiers.[1] However, this breadth introduces challenges in distinguishing the resource from its representations; for example, a URI like http://example.com/the-sting could ambiguously refer to a film, a musical performance, or a related discussion forum, leading to URI collisions where a single identifier maps to multiple intended resources.[1]
Building on this foundation, the Linked Data principles, outlined by Tim Berners-Lee in 2006, further extend abstract resources by advocating the use of URIs to name and link real-world entities, such as people, places, or events, enabling a machine-readable web of interconnected data.[43] These principles specify that URIs should identify "any kind of object or concept," with dereferencing providing useful information in standard formats and including links to other URIs, thus grounding abstract and real-world entities in the Web's fabric without relying solely on document-centric views.[43] This approach addresses limitations in earlier Web practices by promoting unambiguous identification of non-document resources, fostering applications like knowledge graphs where entities like geographical locations or historical figures are treated as first-class Web resources.[43]
Integration with RDF
RDF, or Resource Description Framework, structures web resources as the foundational elements of a graph-based data model, in which information is expressed through triples consisting of a subject (a resource), a predicate (a relation or property) and an object (another resource or a literal value).[44] This triple format allows web resources to serve as nodes in interconnected knowledge graphs, enabling the description of relationships between entities such as documents, images or conceptual classes.[45] For instance, a triple might assert that a specific web page (the subject resource) has a title (the predicate) of "Example Document" (an object literal), thereby modeling metadata about the resource in a machine-readable way.[44]

In RDF, web resources are universally identified using Internationalized Resource Identifiers (IRIs), which generalize Uniform Resource Identifiers (URIs) to support international characters and are commonly HTTP-based for web accessibility.[45] These IRIs can denote both concrete web resources, such as an image file at http://example.org/photo.jpg, and abstract resources, like the class http://xmlns.com/foaf/0.1/Person representing the concept of personhood in the FOAF vocabulary.[45] This uniform identification scheme facilitates linking diverse resources across the web, allowing abstract entities, including those beyond physical documents, to integrate seamlessly into semantic descriptions.[44]
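The triple model can be sketched with plain tuples; the IRIs below are those used in this section, plus an assumed RDFS class IRI, and the data itself is illustrative:

```python
# Each statement is a (subject, predicate, object) triple over IRIs or literals.
FOAF = "http://xmlns.com/foaf/0.1/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

graph = {
    ("http://example.org/photo.jpg", RDF + "type", FOAF + "Image"),
    ("http://example.org/photo.jpg",
     "http://purl.org/dc/elements/1.1/title", "Example Photo"),
    ("http://example.org/alice", RDF + "type", FOAF + "Person"),
}

# A predicate or class is itself a resource, so it can appear as a subject too.
graph.add((FOAF + "Person", RDF + "type",
           "http://www.w3.org/2000/01/rdf-schema#Class"))

print(len(graph))  # 4
```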
To manage vocabulary terms and prevent naming conflicts, RDF employs namespaces, defined via IRI prefixes such as rdf: for the core RDF vocabulary (http://www.w3.org/1999/02/22-rdf-syntax-ns#) and owl: for the Web Ontology Language (http://www.w3.org/2002/07/owl#). These prefixes shorten full IRIs in serializations such as Turtle, making it easier to reference standard resources like rdf:type for classifying subjects or owl:Class for defining ontological categories.[46]
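Prefix handling amounts to simple string expansion; a hypothetical helper using the two namespaces mentioned above:

```python
PREFIXES = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "owl": "http://www.w3.org/2002/07/owl#",
}

def expand(curie: str) -> str:
    """Expand a prefixed name like 'rdf:type' to its full IRI."""
    prefix, local = curie.split(":", 1)
    return PREFIXES[prefix] + local

print(expand("rdf:type"))   # http://www.w3.org/1999/02/22-rdf-syntax-ns#type
print(expand("owl:Class"))  # http://www.w3.org/2002/07/owl#Class
```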
Querying RDF-structured web resources is facilitated by SPARQL, the W3C-recommended query language that retrieves and manipulates data by pattern-matching triples across RDF graphs or datasets.[47] For example, a SPARQL query can select all resources of type foaf:Person linked to a specific web document, enabling federated searches over distributed knowledge bases.[47]
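SPARQL's basic graph patterns amount to matching triples against templates containing variables; a toy Python analogue (not a real SPARQL engine; the data and names are illustrative):

```python
FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    ("http://example.org/alice", RDF_TYPE, FOAF + "Person"),
    ("http://example.org/bob", RDF_TYPE, FOAF + "Person"),
    ("http://example.org/alice", FOAF + "knows", "http://example.org/bob"),
]

def match(pattern, graph):
    """Return triples matching a (s, p, o) template; None acts as a variable."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?s WHERE { ?s rdf:type foaf:Person }
people = [t[0] for t in match((None, RDF_TYPE, FOAF + "Person"), triples)]
print(people)  # ['http://example.org/alice', 'http://example.org/bob']
```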
The evolution of RDF culminated in the 1.1 specification released in 2014, which enhanced resource modeling by adopting IRIs for broader internationalization, introducing RDF Datasets to support named graphs for context-aware resource grouping, and adding serialization formats like Turtle for more concise representations.[48] These updates improved interoperability and expressiveness for web resources in semantic applications without altering the core triple structure.[48]
