When I was younger I once asked my father about the Internet. I wanted to know if I could also have a website of my own online. Some time later my father came back to me with a book called Schrödinger lernt HTML5, CSS3 und JavaScript (Schrödinger learns HTML5, CSS3 and JavaScript). With the help of the book and my father, my first HTML website with some colourful <div>s went online. Everybody could reach it. It felt magical.
My enthusiasm was ignited, and I continued to learn through books and self-study. My little corner of the internet developed along with me. It underwent several changes, from a static website to WordPress (there was no way around it) and back to a static website. The magic and fascination I felt with the first version of the website remained and grew.
But as I grew up, so did the Internet. And I have to say, it has developed in a strange direction. Whereas surfing the Internet used to take you to lots of different, exciting, funny, colourful places, nowadays you visit tailor-made platforms, you are trapped in bubbles and controlled by algorithms.
This master’s thesis was born partly out of nostalgia and partly out of a certain frustration. I want to recapture the magic I felt at the beginning of my journey on the Internet. So I looked for a way to rediscover it, to make the Internet more visible again and to formulate alternatives to the status quo. /sys/net/visible is an attempt to do just that.
The commercialisation and centralisation of the internet has created a number of problems which are described and addressed in the following contextual considerations. Each problem described is then countered by an idea that formulates (speculative) alternatives.
The Internet was developed as a decentralised network connecting computers to access data from anywhere. Originating from J.C.R. Licklider’s idea of the Galactic Network, the ARPANET was developed as an early implementation. It demonstrated the viability of packet switching—sending data packets between computers—a foundational step towards data communication in computer networks. Additionally, it implemented TCP/IP, one of the first network protocols, to standardise this communication. Both technologies laid the technical foundation for the Internet [1].
The ability of multiple computers to work together without central control originated from the principle of “open architecture networking”. Key design decisions were: networks remained autonomous, communication was “best effort” (leaving reliability to the endpoints), and intermediate gateways (routers) were kept simple and held no state about individual data flows. There was explicitly “no global control at the operations level” [1].
Sending messages via email and sharing files via the File Transfer Protocol (FTP) expanded the possibilities of computer communication and data exchange. Ted Nelson formulated the ideas of hypertext and hypermedia [2], which were further implemented by Tim Berners-Lee. The Hypertext Markup Language (HTML) and the Hypertext Transfer Protocol (HTTP) were the final building blocks for the Internet as we know it today - the World Wide Web [3]. It should be noted that these technical protocols were built in a peer-to-peer way, following the design principles described above, making them inherently decentralised.
Beyond the technical implementation, a certain understanding and means of cooperation were fundamental to this decentralised design of the Internet. As Leiner et al. point out:
The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. [1, p. 29]
The exchange within the community and the collective understanding of working together to design the Internet shaped the design choices of the underlying Internet technologies. The Internet’s strong connections to university environments meant that principles such as the open publication of ideas and results were fundamental. Accordingly, new concepts and discussions about the design of the Internet’s infrastructure were made public in the form of Requests for Comments (RFCs) [1]. These were exchanged via FTP, meaning that the discussion of the Internet’s infrastructure took place on the infrastructure itself.
At the end of the paper Leiner et al. describe the following concern:
The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. […] With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stake-holders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future. [1, p. 31]
The concern about managing the Internet’s development proved well-founded as commercial use increased, especially after the decommissioning of the National Science Foundation Network (NSFNET) in 1995 removed the final barriers to commercial usage of the Internet [4]. The dotcom bubble of the late 1990s followed, marked by massive venture capital investment [5]. While the subsequent burst of the bubble led to widespread failures, it also consolidated the Internet landscape. Surviving corporations, often with more robust, centralised business models, gained dominance [6]. This period marked the beginning of a significant shift away from the Internet’s original decentralised architecture towards the more centralised, platform-driven structure common today.
The previous introduction to the origins of the Internet was important for understanding the design of the Internet and the ideas behind certain technical and social protocols. It should have become clear that the original idea of the Internet was based on open principles, sharing and collaborative decision-making. This makes it all the more surprising to look at the Internet today with its current problems. Three of these problems will be analysed and discussed, each juxtaposed with an idea that formulates an attempt at a solution.
When talking about the infrastructure of the Internet we often refer to it as “the cloud”, a metaphor that obscures reality. Clouds are ephemeral, weightless, and natural—the perfect misdirection from what cloud computing actually represents: massive data centres consuming huge amounts of electricity, requiring gallons of water for cooling, and demanding constant mining of rare earth metals for expansion [7].
This obscuration is no accident. Tech corporations developed centralised infrastructure to meet the demand of increased connectivity and data exchange. In parallel, they deliberately cultivated an image of immateriality. Requesting data feels flawless and immediate, seemingly at the speed of thought. Cloud providers collectively operate energy-intensive facilities to make this experience possible, yet market themselves behind the immaterial imagery of white clouds. The environmental externalities and locations of data centres remain conveniently hidden behind login screens and sleek interfaces.
The reality is far more concrete. Data centres accounted for 0.3% of global carbon emissions in 2019, and the Information and Communications Technology (ICT) sector as a whole for 2% [8], with projections suggesting it could claim up to 23% of global emissions by 2030 [9]. Despite public commitments to renewable energy, data centres most frequently rely on fossil fuels [10]. With the current rapid adoption of AI technologies, corporations’ hunger for energy is accelerating, leading them to build new nuclear power reactors [11] and gas generators [12]. Even the physical infrastructure—fibre optic cables crossing oceans, server farms placed in remote locations—is invisible, abstracted away from the average citizen.
As the demand for computing power increases, we face not only the challenge of energy consumption, but also a manufactured cycle of electronic waste. In the name of progress, more data centres are being set up at ever shorter intervals, while working equipment sits idle. Building and commissioning data centres is the main driver of CO2 production; once they are operational, increased data bandwidth is a minor factor [13]. In addition, there are compressed replacement cycles for server hardware that remains operational. According to Gydesen and Hermann, the average economic life of servers is between 3 and 5 years, while their technical life can be up to 10 years [14]. The drivers of this cycle—resource extraction, manufacturing pollution, electronic waste—are typically absent from corporate sustainability reports and cloud marketing materials.
This infrastructure creates a state where complexity builds upon complexity until it becomes incomprehensible. As computing moves further into the cloud, users and developers lose not only understanding but also agency. Current computing systems “go to ridiculous lengths to actually prevent the user from knowing what is going on”, resulting in unobservability and technological alienation [15].
In mainstream computing, “ease of use” is usually implemented as “superficial simplicity” or “pseudo-simplicity”, i.e. as an additional layer of complexity that hides the underlying layers [15].
Resistance Through Visibility and Constraint
In response to this hidden materiality and high consumption of resources, several projects are attempting to uncover and create alternatives to the cloud. The Low-Tech Magazine is a website hosted on a minicomputer in Barcelona powered by a solar panel [16]. During extended periods of cloudy weather, the site goes offline as the required power cannot be maintained. This design choice creates an alternative narrative to what it takes to power a web server. It also exposes the cloud, or more specifically the infrastructure, by creating a tangible relationship between environmental conditions and the availability of the site—if the sun isn’t shining, the site isn’t loading.
The Solar Protocol extends this idea by creating an alternative web hosting model in which servers in different solar time zones take turns hosting content based on available sunlight [17]. Similar to the Low-Tech Magazine, each server is powered by a solar panel. A load-balancing logic directs traffic to the server with the most solar power at the time of the request, prioritising natural conditions over fast response times and putting an ecologically sound decision before a capitalist idea of urgency [17].
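To make the load-balancing idea more concrete, the following is a deliberately simplified sketch in Go. It is my own illustration and does not reproduce Solar Protocol’s actual implementation; the endpoint names and parameters are assumptions made for this example. Each solar-powered mirror periodically reports its charge level to a small directing service, which redirects visitors to whichever mirror currently reports the most energy.

```go
// Illustrative sketch (not Solar Protocol's code): requests are
// redirected to whichever mirror currently reports the highest
// solar charge, so available energy decides where traffic goes.
package main

import (
	"log"
	"net/http"
	"strconv"
	"sync"
)

var (
	mu      sync.Mutex
	charges = map[string]float64{} // mirror base URL -> reported charge (0..1)
)

// handleReport lets a mirror push its current state, e.g.
// /report?url=https://mirror-east.example&level=0.82
func handleReport(w http.ResponseWriter, r *http.Request) {
	url := r.URL.Query().Get("url")
	level, err := strconv.ParseFloat(r.URL.Query().Get("level"), 64)
	if url == "" || err != nil {
		http.Error(w, "missing url or level", http.StatusBadRequest)
		return
	}
	mu.Lock()
	charges[url] = level
	mu.Unlock()
}

// handleRedirect sends the visitor to the sunniest mirror instead of
// the fastest one, prioritising available energy over response time.
func handleRedirect(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	defer mu.Unlock()
	best, bestLevel := "", -1.0
	for url, level := range charges {
		if level > bestLevel {
			best, bestLevel = url, level
		}
	}
	if best == "" {
		http.Error(w, "no mirror is currently powered", http.StatusServiceUnavailable)
		return
	}
	http.Redirect(w, r, best, http.StatusTemporaryRedirect)
}

func main() {
	http.HandleFunc("/report", handleReport)
	http.HandleFunc("/", handleRedirect)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The point of the sketch is the decision rule: the availability of energy, not response time, determines where a request ends up.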
These projects follow the principles of permacomputing and low-tech, two movements which seek to reduce wastefulness and energy consumption and to create technology that strengthens rather than depletes ecosystems [15], [18], [19]. As Ville-Matias “Viznut” Heikkilä argues, computing needs to become resource sensitive—aware of surrounding energy systems and capable of adapting to changes in energy conditions [15]. Instead of designing for consistent high performance regardless of environmental cost, permacomputing advocates systems that scale down when energy is scarce, preserving essential functions while suspending others [15].
Additionally, permacomputing argues for local control over computation, organised by the communities that use that computation [19]. Software and hardware therefore need to be decentralised and modularised, to keep single components small, maintainable, and adjustable to community needs [19], [20]. Another benefit of keeping components small is that it removes the reliance on Moore’s law to compensate for software bloat [19].
This implies that communities need to be able to build digital literacy around computing and technology. Knowledge about technology should be easily accessible and technology must be thoroughly documented [21]. In that regard, the modularisation of software and hardware components is beneficial, as it keeps the complexity per unit low [19], [20].
Societies should support the development of software, hardware and other technology in the same way as they support scientific research and education. The results of the public efforts would be in the public domain, freely available and freely modifiable. Black boxes, lock-ins, excessive productization and many other abominations would be marginalized. [15]
As an answer to the technological obsolescence implemented by manufacturers, the permacomputing and low-tech movements formulated the idea of planned longevity [20], [22]. Hardware and software components should be designed to stay functional for as long as possible to reduce resource usage. One possibility would be to design computer chips with redundancy and bypass mechanisms to keep them functioning when certain internals wear out [22]. Another strategy is the recycling of old computer components, for example by reallocating them to other functions [19].
These movements are not motivated by nostalgia for simpler technology; rather, they formulate a critique of how contemporary computing conceals its material impacts—and real alternatives that make those impacts visible and manageable. The cloud may promise infinite scalability, but these movements remind us of the limits of the earth’s resources.
The Internet began as a decentralised infrastructure for information exchange, born out of the efforts of publicly funded universities, military research and civil society, which had little or no interest in capitalising on the infrastructure. However, as corporations increasingly recognised the potential of the Internet, a consolidation of power took place. Lacking a clear strategy to market the decentralised nature of the Internet, these corporations began to provide services through centralised platforms [5].
In the beginning, only people with money, time, and knowledge could participate in the Internet by programming their own website, buying a domain and managing their own server. To capitalise on the Internet, corporations began to build platforms that allowed communities to create, modify and share content. This development could be seen as a democratisation of the Internet, but it is rather a capitalisation of user-generated content and data [5].
Platforms have centralised the way content is presented and distributed on the web. When publishing their own website, users retain complete control: they can decide who can access and discover their content and applications. By using platforms to create and publish content, users give up much of that control, in the most extreme cases even losing the rights to their content [5].
These platforms not only control how we publish content, but also how and to whom that content is presented. As users, we have become increasingly subject to algorithmic gatekeeping that determines content visibility based on opaque criteria designed to maximise engagement and profit rather than information quality or user intent. The attention economy uses the data provided by users to power the algorithms which decide what is presented to us. A lack of understanding about which data is used and how the algorithms work makes us highly dependent on them. As Shoshana Zuboff notes, our online behaviours become raw materials that are extracted, analysed, and used to predict and influence our future behaviour without our consent [23].
The dependency runs even deeper, as Anna Longo precisely describes:
Once machines were a means to our ends; then they were the ends which we were the means; now they are oracles that interpret signs and whose prophecies we interpret. [24, p. 1]
Platforms exemplify this transformation, functioning as reward systems that keep us interacting and evaluating our “game strategy” while never knowing how the game might change tomorrow [24]. The mechanics remain blurred—algorithms that determine what we see operate behind impenetrable walls.
With these platforms in place, centralising content production and discoverability, corporations developed sophisticated systems to extract and profit from the content we as users gave away for free. This extraction occurs through multiple channels, for example by selling advertising directly against content or by harvesting user data for targeted advertising. The platform itself captures most of the value while bearing minimal responsibility for content production costs. This economic model enabled companies to reach unprecedented market capitalisations largely based on content not created by them.
Once the social contract has locked in users, a combination of addiction and social conformism makes it impossible for users to freely leave the platform and go elsewhere. [25, p. 14]
This evolution of digital power structures has led scholars like Yanis Varoufakis, Jodi Dean, and Cédric Durand to describe our current system as technofeudalism, suggesting a new economic paradigm potentially moving beyond traditional capitalism [26], [27].
In this view, digital platforms function like feudal lords, extracting value akin to rent (data, content, attention) from users who need access to these digital spaces for social and economic participation. The dynamic is compared to medieval serfs working land owned by lords: users provide value—again through data, content and attention—often without direct wage-like compensation, receiving platform access in return, while the platform owners—the new lords controlling the essential digital land—capture the majority of the value [27], [28].
Platforms are doubly extractive. Unlike the water mill peasants had no choice but to use, platforms not only position themselves so that their use is basically necessary (like banks, credit cards, phones, and roads) but their use generates data for their owners. Users not only pay for the service but the platform collects the data generated by the use of the service. The cloud platform extracts rent and data, like land squared. [29]
Decentralisation and Organised Social Networks
Geert Lovink offers the concept of organised social networks as a compelling alternative to the dominant logic of centralised social media platforms. The focus here is on network versus platform. Lovink, along with Ned Rossiter, argues that these organised networks provide a way to move beyond the “weak links” and the exploitative “secretive economy of data mining” that characterise social media. Their vision emphasises the importance of stronger ties and sustained engagement among network members, aiming for forms of empowerment that extend beyond the fleeting interactions typical of large-scale platforms [25], [30].
The core principles of organised social networks revolve around enabling intensive collaboration within smaller, more focused groups that share specific purposes. Another crucial element is strategic communication, designed for effective action rather than mere broadcasting. Before such networks can be created, however, isolation has to be overcome and an overview of collective skills and connections established [25], [30].
The proposed way out consist not so much in the creation of networks as such but building meeting points (note: not platforms). Such hubs or nodes can be designed as points of aggregation, temporary centers of activity. While the platform is aimed at economic exchange and (data) extraction, the aim of the hub is to create commonality. There can be a thousand hubs but only one plateau. [25, p. 25]
To build those meeting points in which such networks can be envisioned, Lovink also underlines the importance of building and controlling alternative infrastructures, including secure communication channels, to ensure a degree of autonomy from corporate platforms and their inherent surveillance mechanisms. This involves a critical engagement with software cultures and a conscious effort to move beyond commercially motivated and potentially compromised social media software [25].
What happens when we decide to put in a massive effort to dismantle “free” platforms, including their culture of subconscious comfort and spread actual tools–including the knowledge of how to use and maintain them? Tech has become a vital part of our social life and should not to be outsourced. This can only be done when priority is given to “digital literacy” (which has gone down the drain over the past decade) [25, p. 74]
The Fediverse—a network of interconnected, independent servers using open protocols like ActivityPub—and its applications, such as Mastodon, Pixelfed, and Lemmy, can be understood as an attempt at this alternative infrastructure. With access to a server, anyone can host a Fediverse service. This decentralised approach to physical infrastructure parallels the decentralisation of power—no single entity can change the rules for everyone [25].
In contrast to X or Bluesky, the federated alternative Mastodon adds an additional layer for discovering content. Besides your own network and the global feed, Mastodon offers a local timeline that shows the messages posted on your own server. This creates a focused group of people with shared interests, establishing stronger links than fleeting interactions [25].
The Mastodon idea is to show how exciting it is to log into the unknown and realize that there are people who share your interests. [25, p. 112]
The Internet and its interfaces are built with assumptions about who the user is: assumptions based on the stereotypes and hierarchies that exist in our society. Despite early hopes that the Internet, as a digital realm for society, would be a place in which race, gender, and class do not matter, the inherent structure of the web reproduces the same power dynamics that shape the offline world. Lisa Nakamura describes the Internet as a “place where race happens”, not one where questions of identity cease to matter [31].
Platform design decisions are therefore never neutral; digital infrastructure, from visible interface elements and algorithms to underlying database structures and protocols, encodes specific values and expectations [32]. This includes social media optimisation for engagement metrics that fosters hostile environments, content moderation systems that inadequately protect users and apply policies unevenly, rigid binary gender categories, and biased facial recognition systems [33], [34], [35], [36]. These technical choices create an Internet that inherently favours certain users while marginalising others, reinforcing power imbalances [35].
The consequences manifest as disproportionate online harassment experienced by FLINTA*, BIPOC, LGBTQIA+, and disabled individuals [37]. Privacy violations and the impacts of automated decision-making technologies also harm these communities by reproducing discriminatory patterns. This exclusionary design reflects connected systems of privilege and oppression—what Sasha Costanza-Chock terms the “matrix of domination”—that shape technology, demonstrating how digital infrastructure often amplifies existing societal inequalities [35].
Feminist Infrastructures and Interventions
In response to the biases present in our current technological infrastructure, cyberfeminism has emerged to define a more just handling of technology. Rather than simply seeking representation within existing systems, cyberfeminism questions fundamental design principles that shape the web, technology and thus society. The cyberfeminist perspective recognises that technology is never neutral—it is always designed by someone, for someone, with particular values embedded within it.
The emergence of cyberfeminism, a term gaining traction in the early 1990s starting with groups like VNS Matrix, built upon earlier feminist engagements with science and technology [38]. Donna Haraway’s 1985 A Cyborg Manifesto is often seen as an important precursor in which she introduces her concept of the cyborg—a fusion of organism and machine [38]. That concept offered a powerful idea for rejecting rigid boundaries and essentialist identities, particularly challenging the binary understanding of gender. This was not about integration into the male-dominated field of technology; it was about embracing hybridity and leveraging technology’s potential to transform identity and social structures [38].
Feminist servers represent one of the most concrete manifestations of cyberfeminism. They apply feminist theories through Internet infrastructure. By providing hosting and services, they protect feminist, queer, and social justice communities from censorship, surveillance and harassment. Additionally, they create alternative spaces for feminist, queer and non-conformist narratives, free from the fear of being censored or deleted through content moderation. Projects like Systerserver [39] and Anarchaserver [40] prioritise consent, privacy, and community governance in their technical and social protocols. Unlike commercial platforms, where terms of service are imposed, feminist servers develop their policies collectively with those they serve [41].
The goals behind feminist servers extend beyond creating a safer space within existing paradigms. As articulated by the TransHackFeminist network, feminist servers aim at “gaining autonomy in the access and management of our data and collective memories”, preserving histories and perspectives [42].
Legacy Russell develops the idea of cyberfeminism further by perceiving technical errors as opportunities for intervention. Rather than seeing glitches as failures to be corrected, Russell positions them as “a crucial moment of slippage” where marginalised groups can break through and rewrite the code [43].
A body that pushes back at the application of pronouns, or remains indecipherable within binary assignment, is a body that refuses to perform the score. This nonperformance is a glitch. This glitch is a form of refusal. [43]
These two ideas are united in their insistence that the form of technologies is not predetermined. By pointing out biases in a seemingly neutral infrastructure they create and imagine alternatives. Demonstrated through practice they argue for a more inclusive digital infrastructure.
During my research on permacomputing, organised social networks and cyberfeminism, I began to notice two general themes that recur throughout.
All three ideas deal, at least in part, with the infrastructure of the Internet and address the issues of sustainability, capital and power structures/marginalisation associated with it. Interestingly, the design of the infrastructure is not only discussed as a problem, but also presented as a potential solution. Namely, the physical infrastructure is considered, and how its abstraction and centralisation have led to a consolidation of power among cloud providers. The existing problems caused by the centralisation of the physical infrastructure are criticised in particular by the permacomputing and low-tech movements; resource-sensitive design is presented as a solution. In their idea of organised social networks, Lovink and Rossiter also argue for control over the infrastructure, albeit in a more general sense. This control of the infrastructure is necessary in order to escape the current power structures of platforms and to ensure autonomy in the development of the organised social network. Cyberfeminism looks at and criticises the infrastructure of the Internet and its design in a similar way, emphasising power structures and prejudices. The application of cyberfeminism by the feminist servers represents a solution in the form of a participatory design of the infrastructure.
The second theme that appears in all three ideas is that of community. Organising in communal structures is described as a solution to the problems of the Internet, especially in terms of centralisation. Permacomputing and low-tech see community organisation as a way of reducing the complexity of the infrastructure. This automatically makes the technological infrastructure more sustainable, as it is easier to control and therefore more responsive to the needs of the community. Lovink and Rossiter cite coming together in specific, theme-based communities as a fundamental alternative to globalised social media. As the example of the feminist servers shows, organising as a community is necessary for cyberfeminism to ensure that the development of social and technological protocols is fair and unbiased. Finally, these proposals remind me of Leiner et al.’s descriptions of how the development of the Internet was initially driven by proposals ‘in the open’ and as a community.
The work /sys/net/visible is dedicated to these two overarching themes, making the different ideas discussed above tangible and visible. From these different positions, key points are formulated that this work conveys.
The majority of the ideas discussed criticise a hidden or concealed entity. For the permacomputing and low-tech movements, the obfuscation of the cloud—or rather the obfuscation of the Internet’s infrastructure—is a problem. The cyberfeminist theorists challenge the hidden biases integrated into the design of the Internet. And the idea of federated networks, as well as the theories of Lovink and Rossiter, deals with the hidden concentration of power through the centralisation of platforms and infrastructure. By uncovering the digital and physical infrastructure of the Internet, the work /sys/net/visible formulates a number of oppositions to the status quo.
Developed as an installation, the work aims to make the infrastructure of the Internet tangible. The installation therefore consists of three decommissioned PCs and one decommissioned smartphone that were either rescued from the trash or retrieved from far-flung corners of households. They have been repaired and cleaned to be displayed, with their internal components exposed and visible. This follows key arguments of the permacomputing and low-tech movements to repurpose old technical components. Additionally, by presenting the internal components of the PCs, the installation de-mystifies web servers and gives an impression of what a web server is and can be. This is grounded in the idea, shared by the permacomputing movement and the feminist servers, of educating about and making technology accessible.
In terms of content, the devices have been repurposed as web servers. Serving static websites normally requires significantly less computing power than a consumer PC or smartphone provides, which is why the old components are sufficient in this case. The underlying technology was implemented with modern software tools that make optimised use of the available computing power. This approach demonstrates that advances in software can give old hardware a new life.
Each of the devices hosts a website. Every website was created by a different person. The sites were developed by working together as a community to discuss ideas and give feedback. This creates my interpretation of an organised social network working together to explore alternatives to the Internet. In a bidirectional relationship, the creatives and the infrastructure shape each other and the final results are communicated through the websites.
The technical infrastructure ultimately forms the medium for the community to present their ideas. Having control over and access to the infrastructure is another aspect of the organised social network. This goes hand in hand with the idea of the feminist servers and their demand for control over the infrastructure to guarantee the freedom to tell stories and share work independently of algorithmic judgement.
Thanks to the control over the infrastructure, certain design decisions could be made in the interplay of websites and hardware. It allowed a contact microphone to be installed in every PC, recording the sound of the working machine; on the smartphone, the integrated microphone was used. The sound is made available through an API endpoint and streamed to the websites. While browsing, you can hear the machine that serves the website, adding another way of making the infrastructure tangible.
The control also made it possible to read system data and expose it via an API endpoint. CPU, RAM and disk usage can be retrieved, as well as the temperature of certain components inside the devices. Making the system data available heavily informed the design of the websites. The system data and the sound are integrated on the websites, on some more prominently than on others, but always adding to the idea of making the infrastructure more tangible. That the hosting infrastructure and its inherent values inform the ideation process of the websites was not planned, but it supports the concept quite well.
When one of the devices fails, the website it serves becomes inaccessible. The reason for the failure could be an internal glitch, an external power outage, a bot attack, or any of a thousand other reasons. This reveals the true fragility of the Internet’s infrastructure, in contrast to the over-engineered cloud. Naturally, the anatomy of the network changes, as the other sites still link to the failed server. However, the rest of the network remains functional, illustrating the importance of a decentralised approach to infrastructure design.
The friction created by a site going offline, and the small moment of wonder and disruption it creates, is the moment Legacy Russell describes. The glitch makes us think about our dependence on technological infrastructure and opens up spaces for new ideas and narratives.
The implementation of the concept is explained below. This is by no means a chronological narrative—rather one in sections. A chronological documentation can be found in Appendix A, which is a logbook that was kept during the preparation of the work.
A crucial part of creating this work involved finding discarded, out-of-use devices. Initially, I was primarily looking for mini PCs. Through Kleinanzeigen, an online marketplace for classified ads in Germany, I sourced a fully functioning Intel Arches Canyon NUC Kit NUC6CAYB from 2016 and three Intel Thin Canyon NUC Kit DE3815TYKHE from 2014, all without hard drives.
The focus on mini PCs did not serve my concept particularly well. On the one hand, exposing the components, and thus the infrastructure, was not visually convincing, as they are very small. On the other hand, these are quite modern, functional devices that were being resold, so the recycling effect was not particularly significant.
In the next step, I therefore concentrated on devices that had not been in use for a long time or were about to be scrapped. One source was the electronic waste from the University of the Arts in Bremen. Over the course of several visits, I was able to salvage a Hyrican PC PCK02282, an HP Pavilion ze4300, a Dell Optiplex 745, a Gericom Hummer Advance 2560 XL and a Power Mac G4. All of the devices were incomplete—memory sticks, hard drives, power supplies and other components were missing. The necessary parts were gathered in various ways, for example from the Digital Media Studio’s parts inventory.
I found two more laptops that were no longer in use at my parents’ house. These were an ASUS A9RP-5057H and a Dell Inspiron 6400.
Based on feedback from Clemens, one of the collaborators, I decided to integrate a smartphone into the project as well. The main appeal here was to demonstrate that a smartphone can be used as a web server. Clemens provided the project with a Huawei P8 and a Fairphone 3, which he still owned. I also owned a Sony Xperia P.
Ultimately, a collection of the following devices was assembled:
From this list, the Hyrican PC PCK02282, the Dell Optiplex 745, the Dell Inspiron 6400 and the Fairphone 3 were ultimately used. On the one hand, the aim was to present a certain diversity of devices; on the other hand, certain devices could not be made to work reliably. For example, there were problems with the ASUS A9RP-5057H and the Power Mac G4.
After gathering the devices, the first task was to get them up and running again. As many of the devices were missing components, these had to be collected first. For instance, the Hyrican PC PCK02282 was missing a hard drive and memory sticks. The same applied to the Dell Optiplex 745. Fortunately, the Dell Inspiron 6400 and the Fairphone 3 were complete, so no additional components needed to be gathered for these devices.
At this point, certain devices had already been sorted out. For example, no hard drive could be installed in the HP Pavilion ze4300 because the necessary adapter was not available (see Appendix A, Day 2024-12-16). There were also problems with the Power Mac G4. Its power supply was broken and it was impossible to obtain an original Apple power supply. Since Apple uses a proprietary pin layout for its power supplies, no regular power supply could be used. A regular power supply was re-soldered to get the computer up and running, but I ultimately decided against using the Power Mac because the fix was unreliable and the device consumed an enormous amount of energy.
I chose Alpine Linux as the operating system because it is very resource-efficient and supports a wide range of chip architectures. Unfortunately, the Gericom Hummer Advance 2560 XL refused to install the operating system, so it was eliminated.
The devices were provisioned with Ansible Playbooks to ensure a consistent configuration.
With the USB-C-to-Ethernet adapters I had available, only the Fairphone 3 was able to establish an Internet connection, so the Sony Xperia P and Huawei P8 had to be eliminated. An Ethernet connection was necessary because all devices had to be connected via a switch during installation to standardise the connection to the Internet.
Unlike on the computers, Alpine Linux was not installed on the Fairphone 3. Instead, the web server was run within a Termux session that opened at boot time. However, the tricky part of setting up the Fairphone 3 was that the USB-C adapter could only pass through either power or Ethernet, as the Fairphone 3’s USB-C port only supports USB 2.0. Since Ethernet was essential, a step-down converter was installed in a battery frame to convert a 12 V power source to 4 V. This ensured a constant power supply.
This setup created the basis for using the web servers.
NGINX and the Apache HTTP Server were initially considered for hosting the static web pages. However, it quickly became clear that customisations and API endpoints were essential to the project’s implementation. Consequently, a bespoke web server was developed to deliver the web pages.
The web server was originally developed in Deno, a modern JavaScript and TypeScript runtime positioned as a successor to Node.js. However, when it became clear that the Dell Inspiron 6400 would be used, this proved to be a poor choice of technology: the Dell Inspiron 6400 is based on a 32-bit chip architecture, whereas current Deno and Node.js runtimes only run on 64-bit systems.
Therefore, using a compiled programming language was considered advantageous, as the code can be compiled for the relevant chip architectures and operating systems. Furthermore, this allows the server to run as a single executable file, which saves resources and ensures more efficient execution on older devices.
Go was chosen as the compiled programming language because its standard library provides a variety of tools for implementing HTTP servers. Apart from an endpoint that serves the static files, the following endpoints were implemented (a minimal sketch of the server follows below):
/api/health is only used by the reverse proxy to check whether the server is running and whether traffic can be directed to it.
/api/system provides various system data for the device. This includes information on CPU utilisation, memory utilisation, storage usage and temperature.
/api/audio provides audio recordings from the contact microphones as a stream. For smartphones, the integrated microphone is used.
With /api/audio/play, an audio file attached to the static website can be played on the server. This endpoint was primarily implemented for Clemens’s website.
/api/websocket provides a WebSocket connection to the server. WebSockets enable real-time, parallel synchronisation of data between the server and connected clients. This endpoint was primarily implemented for Lars’s website.
In addition to the endpoints, the web server was extended to execute PHP scripts, which are used by Jean’s website.
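To give an impression of how such a server can look, the following is a minimal sketch using only Go’s standard library. It is not the project’s actual source code: the JSON field names, the metrics read from /proc and /sys, and the use of the ALSA tool arecord for capturing the microphone signal are assumptions made for this illustration, and the PHP execution, the WebSocket endpoint and /api/audio/play are omitted.

```go
// Minimal sketch of such a web server (an illustration, not the
// project's actual code). It could be cross-compiled for the 32-bit
// Dell Inspiron with, for example: GOOS=linux GOARCH=386 go build
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

// systemInfo bundles the values exposed at /api/system.
// Field names and metrics are illustrative assumptions.
type systemInfo struct {
	LoadAvg1    float64 `json:"load_avg_1m"`
	MemFreeKB   int64   `json:"mem_available_kb"`
	CPUTempMilC int64   `json:"cpu_temp_milli_c"`
}

// readSystemInfo reads basic metrics from /proc and /sys, which are
// available on Alpine Linux without any third-party libraries.
// (Disk usage is omitted here for brevity.)
func readSystemInfo() systemInfo {
	var info systemInfo
	if b, err := os.ReadFile("/proc/loadavg"); err == nil {
		info.LoadAvg1, _ = strconv.ParseFloat(strings.Fields(string(b))[0], 64)
	}
	if b, err := os.ReadFile("/proc/meminfo"); err == nil {
		for _, line := range strings.Split(string(b), "\n") {
			if f := strings.Fields(line); len(f) >= 2 && f[0] == "MemAvailable:" {
				info.MemFreeKB, _ = strconv.ParseInt(f[1], 10, 64)
			}
		}
	}
	// Millidegrees Celsius; the thermal zone index differs per device.
	if b, err := os.ReadFile("/sys/class/thermal/thermal_zone0/temp"); err == nil {
		info.CPUTempMilC, _ = strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
	}
	return info
}

func main() {
	mux := http.NewServeMux()

	// Static website files.
	mux.Handle("/", http.FileServer(http.Dir("./site")))

	// Used by the reverse proxy to check whether the server is alive.
	mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})

	// Exposes system data as JSON.
	mux.HandleFunc("/api/system", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(readSystemInfo())
	})

	// Streams the contact-microphone signal; this sketch assumes the
	// ALSA tool arecord is installed and configured for the microphone.
	mux.HandleFunc("/api/audio", func(w http.ResponseWriter, r *http.Request) {
		cmd := exec.Command("arecord", "-f", "S16_LE", "-r", "44100", "-c", "1", "-t", "wav")
		out, err := cmd.StdoutPipe()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if err := cmd.Start(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "audio/wav")
		io.Copy(w, out) // returns once the client disconnects
		cmd.Process.Kill()
		cmd.Wait()
	})

	log.Fatal(http.ListenAndServe(":80", mux))
}
```

Because the result is a single static binary, the same code base can be cross-compiled for each device’s architecture and copied onto the device as one file.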
The web server process made the websites available on the respective devices on port 80. At startup, each device also opened an SSH tunnel to a VPS, which forwarded the port. This was necessary to enable the websites to be accessed easily and securely via the Internet in an exhibition context. Traefik ran on the VPS as a reverse proxy, forwarding requests to the relevant tunnels.
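For completeness, such a reverse tunnel could be opened at boot roughly as follows. This is only a sketch that wraps a plain ssh invocation in Go for consistency with the example above; the user name, host and remote port are placeholders rather than the project’s actual configuration, and in practice an init script or a tool such as autossh, which keeps the tunnel alive, achieves the same.

```go
// Sketch only: opens a reverse tunnel so that a port on the VPS
// forwards to the local web server on port 80, where the reverse
// proxy on the VPS can then pick it up.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-N", // no remote command, tunnel only
		"-o", "ServerAliveInterval=30", // notice dropped connections
		"-o", "ExitOnForwardFailure=yes",
		"-R", "9001:localhost:80", // VPS port 9001 -> local port 80 (placeholder port)
		"tunnel@vps.example.org", // placeholder user and host
	)
	cmd.Stdout = log.Writer()
	cmd.Stderr = log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("ssh tunnel exited: %v", err)
	}
}
```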
In addition to the infrastructural aspect of the work, the creation of the websites was another fundamental component. Each of the four web servers was populated by a different artist. The first task, therefore, was to recruit collaborators for the project.
I contacted a selection of potential collaborators from my network. Each message was accompanied by a PDF (see Appendix B) that briefly described the concept and how it would be implemented. I also invited them to a Miro board where I had already formulated approaches and ideas for the websites. By sending the information material, a comprehensive picture of the idea was conveyed so that the potential collaborators could assess whether participating in the project would be of interest to them.
All potential collaborators who were contacted showed interest and found the idea exciting. Unfortunately, some were unable to participate due to time or capacity constraints. Ultimately, Jean Böhm, Clemens Hornemann and Lars Hembacher contributed a website to /sys/net/visible. During the course of the project, Lars Hembacher was supported by Paul Eßer and Rebecca Herold.
The Miro board continued to be used to support collaborative work. Approaches and ideas for the website were collected here. Discussions were also held on how the websites could be connected to each other via API interfaces, for example, thereby developing a stronger network.
Four websites were created for the four web servers. Each collaborator engaged with the topic of the Internet and its infrastructure in some way, inspired by the collaborative process and in line with the work. The websites are only online when the work is exhibited as an installation, at which point the servers are switched on. The websites can be accessed via the following links:
I created the text website, which serves as an entry point to the work /sys/net/visible. In the early days of the internet, discussions about how to design its infrastructure were conducted using so-called Request for Comments, which were published on the internet itself. Similar to this idea, the website documents the concept and process behind /sys/net/visible, as well as serving as a connection point from which all other websites can be accessed.
I created the website. It is hosted on the Hyrican PC PCK02282.
The /sys/net/spectrogram website displays and plays a spectrogram. The sound you can hear and see is based on values from the server that hosts the website. By switching between CPU usage, memory usage and temperature, the website creates a sonic interpretation of the server’s internal state.
It also captures sounds from your device’s microphone and adds them to the mix. By visiting the /send subpage, you can send your own text to the website, which will appear in the spectrogram shortly afterwards. In this way, the website creates a collaborative soundscape between client and server, human and machine.
The website was created by Jean Böhm. It is hosted on the Dell Optiplex 745.
The Inspect game takes you on an enigmatic journey of discovery. During your travels, you visit various places that shape the Internet, exploring the vast Internet landscape. The game is designed so that you travel from the website to your client device, then to your router, then to the ISP access point, and so on. To reach the final destination—the server hosting the website on display in the exhibition—you need to solve riddles and find hints. In a playful way, the path of an HTTP request is explored, educating players about and demystifying how internal Internet processes work.
The website was created by Clemens Hornemann. It is hosted on the Dell Inspiron 6400.
A park is a confined space. Although it is open for people to use freely, its open nature means that, as it becomes more crowded, the privacy experienced by any particular group decreases. Through their presence, people communicate with the outside world and share their activities.
Bearing this in mind, Lars, Paul and Rebecca reimagined the network as a park. The network itself is the park, with the servers acting as landmarks. While walking through the ‘network’, people can communicate with each other. Proximity determines participation. Conversely, it is possible to exist passively within the network or simply pass through it. This concept is visually interpreted on their website.
The website was created by Lars Hembacher, Paul Eßer and Rebecca Herold. It is hosted on the Fairphone 3.
Last but not least, an installation was needed to truly display the devices. This would make them visible and communicate that they are functioning as web servers. It should also be clear that they share and create a network together.
A three-dimensional frame was created from aluminium angle profiles, in which the devices were positioned and displayed. All devices were displayed openly, meaning that their technical components were visible. Each device was illuminated by an integrated light source to draw attention to its components and highlight the technical infrastructure.
Additionally, all devices were connected to a network switch that provided internet access. This was structurally integrated into the profile structure. Ethernet cables were laid prominently within the profile structure to emphasise that the devices were part of their own network.
The devices were mainly business devices from the 2000s. The installation was therefore supplemented with office items from that period, such as a laptop bag, a shirt, shoes, a hole punch and a coffee cup, to provide context and emphasise the history of the devices.
The installation was exhibited for the first time at the Oldenburg ComputerMuseum.
/sys/net/visible deals with the infrastructure of the Internet in a multifaceted way. It discusses the use of physical hardware and visualises software processes in different contexts. While developing the project, I often wondered if the multifaceted nature of the work meant that some topics were overlooked and did not receive the attention they deserved. I couldn’t answer this question, but I still feel that focusing on one of the sub-aspects might have made the work more meaningful. Furthermore, the idea of linking the individual websites to each other has only been partially implemented. I would have liked to see this developed further.
However, I have not finished exploring these topics yet, and /sys/net/visible will be developed further. Some of the collaborators want to develop their websites further. There are also some bugs that need to be fixed on the server side. Furthermore, the source code is to be revised and published as open source so that others can benefit from it too.
Like the Internet itself, /sys/net/visible will continue to evolve.
Appendix A:
See attached file: appendix-a_logbook.pdf
Appendix B:
See attached file: appendix-b_request-for-collaboration.pdf