Motivation: Allow Humans to Live on Earth Indefinitely and Sustainably
Objective: Create Order out of Chaos
Preparation: Be Ready for Anything
The author is a programmer with
12+ years of experience who has been developing a multi-person system in an area
of 10,000 m2 (1 ha). The first part of this site specifies what is
needed to perform the role of programming. The second has to do with the role
proper. The fitting of a profession within a multi-person system can be used as
a template for other trades and professions. Everyone needs a place to live to
be productive. The developed model (schematic below) shows one version of how
that might look. The net output from this model is expected to be greater than
the sum of its parts. The highlighted workspace section indicates that the
author is a programmer, part of the Applied Cluster.
Programming—the art and science of writing code—requires more than a
laptop, aptitude, skill and experience. It requires the right environment and
supporting infrastructure. This begins with nutrition, clean, clear water, fresh
air and a good night's sleep. Next is a distraction-free, equipped space for
working. After that is the ability to interact with those in complementary
disciplines. From this is expected to come code that is modular, well written,
resilient and stable. Balancing time at the keyboard with time away happens with
hands-on outdoor and workshop activities. Motivation and creativity are sparked
by exposure to the arts, music and invigorating story lines found in good books
and videos. Experience is in HTML, CSS, PHP, MySQL and the Apache HTTP Server,
operating in a Linux environment. Past focus has been on web development,
primarily page speed, security and the development of an integrated bundle
package for use with WordPress. Early programming work involved writing code in
TPL (Table Programming Language) to validate datasets and produce publication
ready tables. Later work involved creating a comprehensive database program
written in Visual Basic and C#, in an industrial agriculture setting. Current
focus is shifting to work with datasets, with subject matter selected from
genetics, geology, climate, species and exoplanets. It is expected that within
the next five to ten years, technology will be released that will make this
subject matter selection more relevant.
Welcome to the world's simplest possible website format. It is (currently)
the simplest possible because it consists of only one page. That's right, just
one page. Why is that? Because pages add complexity, and complexity adds to
cost. Not money cost, directly, but time, energy and resources, which
translate into money in a money-based economy. Even if the worry that
goes into meeting needs in a dollar-based economy is removed, the complexity
will remain, like the Dyson sphere that surrounds a star[1] in the classic
Kardashev Type II civilization[2], after the star has gone supernova. Thus,
removing a layer of complexity places less dependence on the technology, and
leaves the remaining time, energy and resources to devote to content,
rather than to the presentation of that content.
To put it another way, we have all been "bamboozled", which is the word I give
to something that has gone undetected for long enough that our
grandparents (let alone our parents) don't remember how it all happened, or
when. All we are left with are archetypes and myths. Grueling, sweaty hand
labour has been gratefully replaced by machines. Then the dulling task of
operating machines has been gratefully replaced by digital equipment, now
robotics. However, the pendulum has swung so far toward automation that
few now know how it all works. This includes the internet. Although the content of
a web page is text that can be accessed via a protocol, two or three layers of
complexity cover this simplicity, making the original message incomprehensible
amongst the noise. Even though the screen the author is typing on and the screen
the reader is reading on are nearly identical, a priesthood of technological
savants sits between the author and the reader, making it difficult--if not
impossible--for the average person to understand how the communication transfer
occurs. Regardless, this generation has resorted to 1800s-style telegraphy--when
texting each other--while using their Star Trek-like superphones.
This brief introduction sets the stage for what follows. The more complex the
subject matter, the simpler the means of recording that subject matter needs to
be, in my opinion. The best form of communication occurs in person. The second
best (in my experience) is by reading, then by listening to audio, then by
video. Learning occurs better when the learner is in a relaxed, almost hypnotic
state, similar to that experienced when just waking up or falling asleep. To
achieve this, I need
to read a book or listen undistractedly to someone speaking. Over the past dozen
years, I have come across many websites; some better than others. What few have,
however, is the ability for the text of the site to be read in an undistracted
fashion, moving cleanly from one page to the next. Neither the standard laptop
screen nor the desktop screen is set up for reading. To accomplish this, the
text first needs to be set up so that the concepts flow in a linear fashion,
then it needs to be formatted so that it can be read in a linear
fashion. This begins with the author and how they format the text. Then, and
only then, can complex or interesting subject matter be presented, in a way that
can be read by the reader who is in the right state of mind.
When growing up, I had most of what I needed, and lacked little. The times
were not especially great. We were in what I would still consider a post war
boom, even though the war had ended 35 years prior. In fact, my life was
relatively uneventful. That is, until I stepped out and began doing what I
considered to be my life's work: being a missionary. I felt called to this at
the age of twelve, and confirmed this calling at the age of twenty five, at
least to my own mind. As a result I joined a group I was involved with at
university, raised support, and began working for them full-time. Although
this is not directly related to the topic of this site (programmer), it is
crucial information for what will come later and is related to the answer given
to the question stated at the top: be ready for anything.
Be ready for the chaos that will arise when you step out for the day. And be
ready if you don't, because it will find you even if you hide in your burrow.
This is the message that Jordan Peterson ends with in his twenty-second lecture
in a course entitled "Personality". Even though focussed on the topic of
personality from a psychological perspective, it might be better entitled
"The Meaning of Life, the Universe, and Everything", as Peterson includes a
history of Psychology, but reaches much farther back than the 1800s. He goes
all the way back to the garden, and the pictures that depicted it, to explain
what humanity has known for a very long time about the constant struggle waged
in this universe.
Now what does this have to do with programming? If life were orderly, and the
world were peaceful, how much need would there be for promoting the self,
conveying information, or analyzing vast amounts of data? Much less, I would
guess. Systems would be set up. They would work as intended. And nothing would
topple these castles. They would remain standing and each person could fulfill a
role within the walls of the city, within the walls of defined order. When order
is dispersed on a cyclical basis, a better strategy, in my opinion, is to look
at the dandelion, and the many parts of nature that are like it. It blooms
early, creates a lot of seeds, and then these seeds are carried by the wind to
start over somewhere else. This concept could be called a "fractilized system",
where each part contains the whole.
In three paragraphs, we have moved from an idyllic beginning (order), to a
disruption of that ideal (chaos), to a rebuilding (order). However, there is a
difference. My parents missed something. My church missed something. And the
schools I attended missed something. Being prepared. Despite their best
efforts. Despite their best intentions. They didn't get it right. They didn't
equip the ship they had built with what it needed. The captain lacked training.
And further, the maps given were the wrong ones. Oops. How long did it take to
figure that out? About thirty years. That is an expensive mistake. In
fact, I have observed that many people--most people--have the wrong
maps. If they have a ship, it leaks. If they were thrown overboard,
they would sink. If they attempted to reach a shore, they wouldn't be able to
find it, because their maps are wrong.
Is it possible to fix a set of errors of this magnitude? The ship is
inadequate, the captain is unprepared, and the maps have been messed up. What
to do. The solution I have been developing begins with the basics. What do I
need to live? Water, food, gear, shelter. What do I need to move? The ability
to walk, and the energy to do so (see step one). What do I need to work? The
right gear. The right environment. Training. And the motivation to keep
after the same task, day after day, year after year. That last part isn't easy.
I need to know who I am, why I am here, and what my role on this planet is. Only
then, when all that is worked out, can I begin to be effective.
It turns out that it is possible to begin at the beginning, and that this
beginning is not difficult. It begins with an empty box, and in this box I put
what I think I need next. Over time, over the course of a few years, if I am
careful, I will end up with a box of gear that provides the basics. How large
is this box? Under 100 litres. How much does this gear weigh? Likely under
50 kg. The next step is keeping it secure. If I begin with a 36"Wx18"Hx16"D
secure steel box with a hinged lid, I can padlock that and be reasonably certain
it will be there the next day, with my stuff in it, ready to use. And with
that, I have just reduced the need to secure my belongings in a house,
townhouse, apartment, trailer or vehicle to near zero. The secure steel storage
provides that functionality. This is the beginning of a fractilized design.
The envisioned future, for me, is simple. It is to have a mix of the right
kind of people, in the same place, who work together to get the job done. That
is about it. "The job" is anything that needs to happen to support the people
who are there. This includes providing food, water, shelter, clothing, and
transportation (the basic set of needs), innovating on the systems required to
produce and maintain those needs, and then using the stable, resilient structure
that has been created to understand, monitor and analyze the natural world
around them, so that others can benefit from the knowledge that they have gained
and the systems that they have developed and refined. At the time of this
writing, I am at about the five and a half year mark on this project, which I
began at the end of 2016. At that time, I was sitting on the second floor of an
off grid, circular straw bale house with passive solar heating and a single wood
stove, comfortable, and in idyllic surroundings. This was sufficient to inspire
me to begin. Who do we need to get the job done? What is the right mix of
people? This is how I began.
The second part of this project, once the group of people was roughly
defined, was where to put them. Do I put like with like, or mix them up? I began
with the intent to "mix them up", but this didn't work very well. It proved too
complicated. Instead I created a set of clusters with a particular focus. These
are currently: academic, applied, trades, arts, gardening (food and nutrition)
and information (analysis, writing, photography, audio, video and presentation).
The placement of each cluster on the property schematic is done in the same way
that a tire is fastened on a wheel hub, by moving around the hub in the same way
a star is drawn, going through the center each time. This maintains the pressure
on the wheel in a consistent fashion, thereby reducing the chance that the wheel
will be distorted. The assumption is that the mix of a group of people will
create an invisible, but perceptible dynamic. Treating this dynamic in the same
way that physical forces are treated, is anticipated to result in a similar
effect.
A third major aspect of this project is the modular nature of its design.
This means that parts are designed to be interchangeable, from the ground up. In
the film recreation of the Apollo 13 mission, the challenge was to take a square
carbon dioxide filter and make it work with a circular housing. It is hoped that
building from the ground up, with low tech fallbacks tested and in place, will
prevent a design fault from turning into something worse. This isn't
easy. Past the tested backpack kit system is a workplace module. Its current
design is a 10'x10'x10' module, with nested components, inside which an
individual is able to perform most, if not all, of the tasks associated with
standard desk work. This will include programming, writing, editing, audio
recording, video recording and video conferencing. A major difference between
this setup and that with which many are familiar (a desk on a floor), is that
the entire module can be picked up and moved around. Once everything is in
place, it stays there, preventing the need to take apart a working system and
put it back together again, simply to change locations.
As I sat down this morning, on the 28th of July, 2022, I was finally able
to "put the chip on the circuit board" and complete the picture. This is shown
in the schematic at the top of this page. That is, we
each function within a set of nested systems. Some of these are implied, some of
these are explicit. A company, to which we go to work, has an explicit structure
which must be followed so that company can produce the goods or services it is
designed to do. Another way of looking at this is that—while we may buck the
notion that we are a "cog on the wheel"—when we are at work, it is a necessity.
This is more strongly pronounced in the military. The soldier, and all those of
different ranks, must perform their role as required, or the entire
system will fall apart. Once this is recognized—that our own time
is an essential part of a well-oiled machine—the rest becomes
easier.
Given the above, the question for this part of our lives is not, "How can I
be free?", but "How can I be part of a well designed system that uses my talents
to their best effect?" While many people may not spend their days designing the
system they are in, someone has, and acquiescence to a system is an
implicit approval. "If you don't like it, make it better" sounds to me like a
better motto than complaining while going along with it. Designing a system that
determines how multiple people will interact is similar to other types of
design. Consider that the effort going into making a grocery list for one
person isn't a whole lot different than making a grocery list for a family, or a
grocery list for three families, or five families. It is simply operating at a
different scale.
The skills required to program are similar to the skills required to design
schematics for a multi-person system: a community. Who is needed? How many?
Where do they need to be placed? Then, once that is done, the designer can place
themselves within that picture, and disappear to be as one among many.
A hierarchical structure with a high level of top-down control is needed in an
environment with many uncertainties. However, once the system has been defined
and proven to be stable, the hierarchical structure can be levelled, to create a
more democratic system, where trained, skilled and knowledgeable people provide
insight for the entire system, based on their experience. Finally, the system
defined as "sociocracy" sends feedback from the "bottom" of the system to the
"top" to create an improved feedback loop. This system was developed within a
Dutch electrical engineering company, and proved to be effective.
It is assumed that the design of the space in which we live and work can be
arranged to enhance the experience and result in better outcomes. A balance of
time alone and time interacting with others allows the individual to focus on
their work when they need to, and interact with others when they need to do
that. There should be a division between these two spaces, but the distance
should not be so great as to inhibit work interactions.
On the floor of the office building I worked in while at Statistics Canada,
there was a cafe, where we could grab a coffee and a donut, and on the main
floor was a larger cafe. A ten-minute walk away, and further, were more choices.
However, most of the interacting we did while at work was on the floor of the
division, around the desks where we worked. The designed space for the
multi-person system presented here assumes that each individual has their own
workspace, if not their own free standing workshop. This may be something they
build themselves, and have the freedom to do so, within the general structural
constraints imposed by the overall design.
Thus, the question remains: Should a node on the outer rim of the cluster be
reserved for group interactions? It has been intended all along that the commons
area for each cluster be a place where children can play safely, while being
observed by one of their parents, not more than a few steps away. The central
commons area has been thought of as more of a spiritual center, where people can
go to meditate, be at peace, and envision what they would like to see happen for
their community, and the world at large. This leaves a node on the outer rim of
each cluster as a place to meet for focussed discussion related to that cluster.
This is not part of the design feature set at this point.
A key design specification for the workspace module is to make it movable.
That is, it must be able to be picked up from above or from its base, and moved
as a whole. This will allow the contents to remain as is, without having to be
taken apart for a move to take place. The interior space is imagined as being
about a foot greater than the reach of the occupant. For a six foot adult, this
results in a structure seven feet across the width, seven feet across the depth,
and seven feet from the floor to the top of the inside of the structure. This
will allow access to each part of this structure, while remaining more or less
stationary in the center of it. The edges may be bevelled to remove unreachable
corners and make the structure more aesthetically pleasing. A suggested bevel
is one, one and a half, or two feet. The focus of the interior is on typical desk
work, which includes: typing (writing and programming), keyboarding, viewing
multiple monitors, printing, listening to music, audio recording, video
recording and video conferencing. Eating and drinking while seated at the desk
also needs to be factored in, by leaving enough room in front of the keyboard
and to either side. The desk should be ergonomically designed, so that it sits
at a comfortable height.
The audio and video related specifications are a move away from a typical
desk setting. Building an acoustically neutral environment may be accomplished
by angled surfaces (which strong bevels would provide), surface materials, and
placement of speakers and a microphone or microphones. More than one microphone
may be needed, depending on the level of professionalism desired. Similarly,
more than one camera angle may be helpful; this may include one at the side and
one from behind, viewing the screens. A background, or multiple backgrounds, can
be provided on rollers, pulling down whichever one is needed. While these audio and related
video specifications, which move away from the typical desktop setting, are more
involved, it is expected that the results will amply repay the additional cost
and effort of setting them up. The internet is rife with videos of otherwise
well-meaning people who haven't gotten around to creating an adequate recording
environment, even though many of these same people have excellent content to
share. Specifying a work, recording and conferencing environment which can be
set up once, and then remain that way, is like being able to drive off the lot
with a functioning car, rather than having to build it from spare parts and
spare cash over the years. If everyone had to do that with their vehicles,
imagine what would be on the roads!
As programming is mind intensive (it requires attention to detail, the
ability to manipulate abstract concepts and the use of logic), attention paid to
how the body and mind are recharged is expected to achieve better results
(higher quality code, fewer errors and more lines written, for example). It
doesn't need
to take much. Time by the water, a swim, and a rest outside for a few hours work
wonders. Related to this is the observation that nature also appears to take a
break during the mid day hours. The hours as the sun is rising and as the sun
is setting are called the "golden hours". This is because the light is better for
photography. After paying attention to how much work I can get done during
different times of the day, it appears that these early morning and late
afternoon hours are also better for detailed work and long term thinking,
respectively.
Conversely, even though I worked full time at a desk job using the Table
Programming Language to create tables from datasets, seven and a half hours a
day, and got paid well for that work, the full-time presence didn't result in an
even distribution of work accomplished. This type of uneven distribution of
effective work is known. At this time, I am not aware of any strategy in use
that addresses this tailing off of productivity in full-time desk jobs. There
may be some; I am simply not aware of any. Therefore, paying attention to nature
and how creatures such as ducks and geese stop feeding and simply rest mid day,
may be a clue as to the type of schedule we ought to consider. This section is
being written after resting for a number of hours outside, having had a swim and
doing a few simple chores. I now feel much better (had been tired from some
travel), and so am tackling this section while I am thinking of it.
Related to this, of course, is the food we eat and the water we drink. I was
surprised to find out that foods heavy in the right types of fat are good for
us, particularly our brain. Vitamin C is needed, as our bodies do not produce
it. And a magnesium and calcium supplement may be recommended. Regardless of the
specifics, attention paid to what we consume is also expected to result in
positive benefits.
Sleep is essential. Without it, within 48 hours, functionality is reduced.
Following a minimalist approach, a concept for a nautically inspired resting
berth has been developed, focused on allowing the individual to obtain a solid
night's sleep. There is more water on this planet than land, with many lakes, streams and
rivers in addition to the oceans, especially in the area of Canada in which this
concept is being developed. Thus, the structure is designed to be floatable, and
yet be at ease and look at home on land. Horizontal symmetry results in a
visually balanced structure that will translate into balance on the water. Dual
gull wing doors (not shown) allow for a refreshing cross breeze on warmer days,
while the heavily insulated 6" walls provide a snug environment for the colder
winter months. The compact internal volume, coupled with a snug envelope, is
expected to remain comfortable being warmed by body heat alone: 80 watts is
produced by the resting adult. More heat can be obtained by adding a few hundred
more watts from a secondary source.
This site has begun with one page. If done right, it will be able to contain
the bulk of what is expected on a typical website, and can be made to look like
a multi-page site, if needed. There are a number of advantages to this approach.
From the perspective of the viewer, most, if not all, of the content
is available after the first page load. The server can indicate that this
content should be cached by the browser for a week to ten days; thus once
loaded, the reader can access the content later, even if offline. Second, as
most, if not all of the content is available from the first page load, no
subsequent page loads are needed. This solves the problem of the reader landing
on the home page of the site, and then abandoning it before loading a second or
third page. One reason for this is that subsequent page loads incur a delay,
which is inconvenient. A second is that the viewer has no idea where to go
next. By including the whole content from the beginning, the relevant content
which would have appeared on other pages of the site, but might never have been
loaded by the viewer, is accessible without subsequent round trips to the
server. Third, search engine site indexing requires the site to be crawled,
interpreted and ranked. Ranking individual pages so that they display in search
results requires extensive processing and updating. This approach
strips away this excess behind the scenes work by providing a high signal to
noise ratio. The essence of an entire book is provided up front, providing the
viewer (and the indexer) with a high degree of value for the investment of
visiting the domain or subdomain and waiting the few seconds it takes to load
the content found there.
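As an illustration, the caching just described can be requested with a few
lines of server configuration. The sketch below is one way to do it on the
Apache HTTP Server, assuming the mod_headers module is enabled; the ten-day
window matches the figure given above.

    # Ask browsers to cache the page and stylesheet for ten days
    # (864,000 seconds). Requires Apache with mod_headers enabled.
    <IfModule mod_headers.c>
      <FilesMatch "\.(html|css)$">
        Header set Cache-Control "max-age=864000, public"
      </FilesMatch>
    </IfModule>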
In the meantime, the hardware I am using has changed, which has influenced
the software, as well as the interface, the keyboard. The net result of these
changes is a degraded experience. It is more difficult to do what I
was doing with the setup I am using now than it was before, when using a laptop
with a full keyboard, etc. Adapting, however, is resulting in an initial "one
page" site. We shall see how it goes. I intend the format, the look and the
structure, to be like that of Wikipedia, with a collapsible "Contents" section
that provides links to the content of the page. In this case, all of the links
will point to the same page.
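One way to get that collapsible Contents section without any JavaScript at all
is the native HTML details element. A minimal sketch, where the section names
and anchor ids are placeholders rather than this site's actual headings:

    <!-- A collapsible Contents block; every link points to an anchor
         on this same page. -->
    <details open>
      <summary>Contents</summary>
      <ol>
        <li><a href="#motivation">Motivation</a></li>
        <li><a href="#objective">Objective</a></li>
      </ol>
    </details>

    <h2 id="motivation">Motivation</h2>
    <h2 id="objective">Objective</h2>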
One of the little known secrets of the web, is that a static web page loads
incredibly fast, by design. It is the dynamic effects added as the page is
assembled by the server and after it is delivered that slows the internet down
in general, and page delivery on dynamic sites in particular. Smaller dynamic
sites are often underpowered, and—unless the author takes pains to implement
caching and other strategies—the web pages may load three to five seconds slower
than they need to, resulting in a poorer experience for the user, and ultimately
in fewer views (and sales) for the site author. I spent a lot of
time determining why page loads were so slow, and found that—with careful
tweaking—I could get a cached page to load on a dynamic site in under a
second.
Under-one-second static page loads should really be the benchmark for the
web. But even this level of technicality is likely beyond the reach of most
people. "Static page load?" "Cached page?" Regardless, if the entire page is
text and loads as a single file, it should load quickly, even if it is five, ten
or even fifty pages in length. However, did I spend all of this time learning to
program, to focus on page creation, page design and load times? Not really. To
put it another way, if an individual requires help to set up a website this
simple, there is something wrong. All of the skills I have just used in the past
two hours of keyboarding are skills used everywhere else when creating accounts,
uploading media, and so forth. The missing pieces of the puzzle are not
technical, they are conceptual. It is in the better interests of hosting
companies to sell packages that are underused, and to offer technical support
when asked, rather than to reveal the actual cost of delivering content. It is
quite low.
When I began programming, my intent was to use open source software and to
direct my efforts to ecological issues. That is, my question was, "How is it
that our planet is doing so poorly?" and "Why are so many people so poor and
unable to provide for their needs?" Even here in Canada, many appear unhealthy.
Sitting at a low cost big box store, I counted about 5 out of 100 who looked
fit, healthy and well dressed. The remainder had poor body shapes, were
hunched, poorly dressed, etc. I did not expect that it would be that difficult
to spot fit, healthy people, but it was. If we are at "the top of the
evolutionary scale", it would be expected that we would be better able to care
for ourselves. From appearances, we aren't able to do that. In contrast, I doubt
I could find one animal in a thousand that was deformed or looked unfit or out
of shape. Perhaps one in ten thousand. Certainly not the level that we see in
the human population.
The objective of this layout is to provide a simple
format that allows for a focus on content, rather than on the creation of pages
which must be stitched together with an indexing or menu system. Page load time
is a factor in the number of pages viewed on a website. Having to click on a
link to view a new page may cause the viewer to hesitate, thus preventing them
from learning what is on that page. This could be a product, a service, or
knowledge. It is expected that creating a complex, dense single page website
will work better for content that is text heavy, which books typically are.
Pages can be made to wrap using a combination of CSS (styling) and JavaScript;
but do not need to be. The content not currently viewed can be hidden by
default, with perhaps only the headings showing to indicate they are there.
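A sketch of one way this hiding could work, assuming each section is marked up
as a heading followed by a body container; the element and class names here are
placeholders, not a prescription:

    <!-- Section bodies are hidden by default; only the headings show.
         Clicking a heading toggles its section open or closed. -->
    <style>
      section > .body { display: none; }
      section.open > .body { display: block; }
    </style>

    <section>
      <h2>A Heading That Stays Visible</h2>
      <div class="body">Body text, hidden until the heading is clicked.</div>
    </section>

    <script>
      // Attach one click handler per heading to toggle its section.
      document.querySelectorAll('section > h2').forEach(function (h) {
        h.addEventListener('click', function () {
          h.parentElement.classList.toggle('open');
        });
      });
    </script>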
Thinking about placing the entire content of a typical website on a single
page also removes the need for an underlying server based language such as PHP.
This alone removes much complexity. If content needs to be retrieved after a
page is loaded, this can be accomplished using AJAX, an asynchronous call to
the server using JavaScript. The PHP behind many websites can and does grow to
many thousands of lines without the viewer being directly aware of this. The
only indication that a site may be using a lot of code to load a page will be a
delay in the loading of the page. This is the cause of many sites initially
taking three to five seconds to load. Solving this problem for heavy code-based
sites after the fact is a lot of work. It either takes a technical individual
with a lot of experience to do this well, or it takes the purchase of a service
from a host dedicated to the platform being used. This can turn the under $100
a year site into something that costs three times as much.
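For completeness, here is a sketch of such an asynchronous call, using the
fetch() form of AJAX; the path /public/extra.html is a placeholder, not a real
endpoint on this site:

    <!-- A container for content retrieved after the page has loaded. -->
    <div id="extra"></div>

    <script>
      // Retrieve a fragment of HTML asynchronously and insert it into
      // the page. The path below is illustrative only.
      fetch('/public/extra.html')
        .then(function (response) { return response.text(); })
        .then(function (text) {
          document.getElementById('extra').innerHTML = text;
        })
        .catch(function (error) {
          console.error('Could not load the extra content:', error);
        });
    </script>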
Finally, when
the internet first came out, sites were created with site builders that
generated the underlying HTML and CSS. This spared the site author from having
to learn these languages. However, the focus here was still
the creation of one page per topic. Thus the information included in the
"Contact" and "About" pages each generated a single file. For every small
change in that page, including in the styling or in the menu, the entire file
needed to be regenerated and reuploaded. With a "single page" (or file)
concept, a small change in the file requires the entire file to be saved and
reuploaded. However, there is only one file to deal with. This makes it more
certain that the update will be applied online, and not forgotten because it is
one file among many.
The design of a site consumes significant resources. Although the styling
technology (CSS) has matured over the years, it is precisely because so
many options are available that work needs to be done to determine what
style to use. If no style is applied to the text by the site author, the browser
will apply its default styling. For those who prefer a certain look
(and are annoyed by text that is too light or too small), styles can be applied
by the reader's browser to achieve a consistent site-to-site look. These
browser-applied styles can override the site style, which should be considered
by site authors choosing
a style that is too far off the beaten path. Thus, using no styling at all
is a form of styling, but it is the default one. With so much text
flowing across the internet, it is difficult to determine what comes from where.
Thus one objective of styling is to differentiate one site from the next. This
results in the individual site owner carefully picking through the available
themes to find what best fits the project they have in mind and which one sets
them apart from all the rest.
The styling dilemma also cropped up because of the width of desktop monitors.
They are typically wider than the comfortable reading width of text, so the
empty space needed to be filled. It was initially filled by web developers who
wrote code to populate the left and right margins with nifty widgets. The only
problem was, when the need for a mobile view came along, all those nifty margin
filling widgets were now extraneous, and more work (by the same web
developers, of course) needed to be done to remove them. Then along
came the "mobile first" motto, which meant that sometimes the desktop theme got
neglected and ended up looking like a large smartphone screen. The mobile
experience is typically on a view screen with a small form factor. The limited
space available means that most of the styled elements from the desktop site
need to be removed. If they are removed, why bother adding them in the first
place? What is left is a theme that looks something like the one used here at the
time of writing, a single column with a white background and grey margins, with
a standard font.
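Incidentally, that look takes very little styling. A sketch of the single
column just described, with the colour and font choices being illustrative
rather than prescriptive:

    /* Grey margins with a centred white reading column. */
    body {
      background: #cccccc;            /* the grey margins */
      font-family: Georgia, serif;    /* a standard, readable font */
      margin: 0;
    }
    main {
      max-width: 36em;                /* a comfortable reading width */
      margin: 0 auto;                 /* centre the column */
      background: #ffffff;            /* the white column */
      padding: 2em;
    }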
The point is that it doesn't take much to write text that can be transmitted
via the internet protocol to another device upon request. That is essentially
what is happening here. The complexity arises when that basic task becomes
overloaded with everyone wanting to jump on the wagon on every single
trip across the vastness of the internet. Understand what exactly
the internet is and what is being accomplished by a page view, and the task
changes from one having a high level of difficulty, to one which is much, much
simpler. For a "one page" website, like this is while starting out, the focus
shifts from creating behind the scenes structural components to writing crisp,
clear text, and making sure it is cohesive. This is a lot easier when it is all
on the same page, as the flow and the focus move more naturally from topic to
topic rather than jumping around, as can happen when creating individual
pages.
It is estimated that resilience is inversely related to complexity. The more
complex a system is, the less resilient it is. Resilience here is defined as the
ability "to last" and to remain functional in the event of a system
compromise. Can I still extract the essence of this site from the underlying
text, if everything else goes haywire? The answer is, yes. The underlying
text—the copy—is formatted in much the same way as it is presented when marked
up and styled. The lines are limited to a readable length. And, since it is
text, the file is readable on any computer system which can read text formatted
as UTF-8; which is most, if not all, of them. That means that this format has a
high degree of resilience.
Conversely, what is the purpose of a highly complex site? A highly complex
site could serve a number of purposes. It could have on it a vast amount of
information. This information needs to be categorized so that it can be found.
This in itself does not add much complexity; an index, a table of contents or
the like could be auto-generated, leading to vast amounts of findable information.
However, if that same information needs to be editable by multiple
authors, be reviewed, have revisions and so forth, the level of complexity jumps
markedly. Consider, on the other hand, the site produced by a single author. The
individual author wants to produce a blog, and the only person editing the
content on their site will be them. Do they need thousands of lines of code that
provide a lot of extra functionality they do not need, and then spend a lot of
time making that code secure, optimized and backed up? The answer is no, not
really.
Another purpose of a complex site is to allow the individual author the
ability to sell their product or service. However, the addition of code that
processes financial transactions introduces a significant security risk that
may be better handled by third parties. In addition, the economic model assumed
is that transactions will be conducted anywhere around the world. The internet
opened up a global market to the individual. However, the "internet gold rush"
was short lived, and at the current time, a lot of skill and technical savvy is
needed to compete in the international or national internet market. As a result
of this shift, the author has been working on a model for a multi-person
system (described above) that takes care of much of the need to market
individually produced products and services, by being attentive to how
people are arranged in the same geographic location, rather than bringing
the information to people in diverse locations and assuming this will result in
sufficient sales to support the individual site author.
The open source movement developed in response to the closed source and
proprietary licensing structure, which resulted in functioning software of a
reasonable quality, but at a relatively high cost to the end user, considering
that distribution comprised only a small part of the overall cost of
development. However, in the opinion of the author, the open source movement
has gone too far. As a reaction to cost, it has emphasized the word "freedom":
freedom to use, freedom to give away and freedom to change. While this obviously
reduces the cost to the end user to almost nothing, it has had a number of
effects that are not immediately obvious. The first of these is that lesser
quality software can and does creep into the mix, primarily because so much is
produced that it is difficult to perform quality checks on it before it is
distributed. The second is that it creates an uneven playing field for
software developers. There are those who can and do write software as a hobby
and then release it for free. They may have full time paying jobs in software
development, and so find it easy, perhaps even trivial, to create specific "add
on" functionality for an open sourced platform. This is perfect for the end
user, but it doesn't reflect the true cost of that specific add-on. The costs
have been absorbed by the larger software company paying the developer. They
haven't disappeared.
The net result of this dynamic is that the full-time developer may have to go
to extraordinary lengths to ensure an income stream. One common route is to
offer a basic version for free, and then offer one with more functionality at a
premium. As long as enough people buy the premium version, this works for the
developer. However, it doesn't guarantee that an hour worked is an hour paid.
Further, the virtual landscape changes over time, so the strategy that works for
a few years may have to be changed into something different over time, something
that is, as yet, unknown. Finally, the formula for an even distribution of talent,
quality software and users who have what they need is arbitrary, uncontrolled
and messy. It is similar to content produced for video. Vastly more content is
produced than can be viewed on a regular basis. This results in some content
receiving millions of views, while others receive only a few. While this appears
to be part of a fundamental principle (given the name the Pareto Effect),
recognizing that the open source movement has limits and drawbacks is a start to
finding a system that results in a more balanced dynamic.
In fact, a motivation for developing a balanced system, where people from
select trades and professions can work together in a coherent fashion, was the
realization over time that the average user needs a trained, skilled
and experienced developer to succeed over the long term, and the developer (as
well as each of the others of the select trades and professions), needs a fairly
reliable set of the products and services of others. That is, the ability to
specialize and do it well requires that the specialist has the full support of
others as part of a well designed infrastructure. Once this is realized, it
becomes a matter of determining how to go about this, rather than
whether or not it should be done. Finally, the specialist must be
supported by others to specialize, by definition. It is implied then, that those
who do (and do it well) have found the support they need, pulled out of the
randomness of the society around them. The difference here is that the
programmer has turned his attention to the community and dealt with it as a set
of manipulable objects, arranging them for better effect. This obviously
can be done. It is merely a matter of determining if the arrangement
depicted at the top of this page is sufficient, or if there is a better one
waiting in the wings.
Moving data around and backing it up—including that which comprises the
writing we do—does not need to be complicated, nor does it need to be put off.
However, there needs to be a clear and simple way to accomplish this, with a
path that is travelled on a regular basis, so that it is familiar. I do not
recall losing data of my own, as I have made a regular habit of backing it up,
but I recall losing data that wasn't mine. It was a double mistake. I bypassed
a surge protector that must have been hit by lightning before (and thus was
disabled), and the person in charge of this data had declined a backup option.
As I had no experience with an actual dysfunctional surge protector, it didn't
dawn on me that this was the problem. And as the person in charge of this data
had little experience with storing data on digital media, it must not have
registered that backing up was a good idea, and worth the effort. The result was
lost data.
The format used here has a single file containing all of the text, a single
stylesheet file and a limited number of media files, mostly images. This makes
it easy to back up. The first pass through is laying down the writing track.
This is similar to writing the script before the movie is made. The characters
need to be fleshed out, and the plot formed. Once that is done, a storyboard can
be made, after that, the scenes are defined and actors selected. When writing,
creating or finding images to support the text can be quite an effort. Leaving
them for a second pass makes it easier to keep the focus on writing. Then, once
images are added, they need to receive captions and be defined in the "alt"
attribute of the image element, for the benefit of those using screen readers.
Assertions made require references. The assertions need to be checked for
accuracy. After this, the author may decide to narrate some or all of the text.
This could be done by the author alone, by a hired voice-over actor, or by a
combination of both. Finally, the whole could be placed in video
format, with vignettes being supported by a little more drama, to bring it to
life. And yet, the key--the starting point--is the single file of text.
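When the images are added on that second pass, the markup for a captioned
image with alt text is short. A sketch, with a made-up file name and wording:

    <figure>
      <!-- The alt text serves those using screen readers; the caption
           serves everyone. File name and wording are illustrative. -->
      <img src="schematic.png"
           alt="Schematic of the multi-person system, showing its clusters">
      <figcaption>The property schematic described at the top of this
        page.</figcaption>
    </figure>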
Another benefit of having a single file is that it can be shared more easily
with others by making it available online or offline via an external storage
device. Aside from images or styling, that one file is all
that needs to be copied. It is simple and can be read from its source, with only
a text editor, if needed. The alternative is having more than one file to
accomplish the same job. That "more than one" could be an unwieldy mess of 50 to
100 files or more, difficult to manage and difficult to work with. WordPress
and Concrete CMS allow multiple people to create and edit the same text.
Using an online content management system, however, adds several layers of
complexity, most of which is eliminated when going back to a single, text and
HTML based format. This method is not expected to be suitable for all cases, but
when one or only a few people are involved, it may be preferable due to its
simplicity.
Unfortunately, most site authors miss an essential feature when setting up
their site for the first time. The site directory structure is malleable, and
can make functionality, maintenance, backup and navigation easy, ...or a
nightmare. To be honest, site developers and platform creators must bear much of
the weight of this responsibility. Yet, the initial setup of the directory
structure makes a lot of difference. Consider that a website folder (or
directory) can be likened to a room in a house, or better, a space on the
factory floor. How the factory is set up makes all the difference. A lot of
time, money and energy goes into getting it "just right". Tasks are divided by
speciality. The engineer and architect design. The builders build. The factory
workers work. Yet, when it comes to how site directories are structured,
historical precedent determines—in most cases—how these "factory spaces" are
set up. Further, if one attempts to move these spaces around in ways not built
into the code, the code complains, and makes it difficult—if not impossible—to
perform the adjustment.
One major initial adjustment made on this site is to declare that the
directory in which this content appears is "public". This is done simply by
placing the content in a directory by that name (i.e. /public).
What you see is what you get. And the word means what it says. If the author
places content not deemed "public" in this directory by mistake, the name will
lead to self correcting behaviour. "Oops, I made a mistake. That wasn't supposed
to go there. It was supposed to go over here". And the correction
can be made. Achieving digital precision is a finely tuned task. The easier it
can be made, the better. Doing this may seem pedantic at first glance. However,
consider what happens when it is time for the site to grow. Perhaps the site
author would like a space for members to join. Is that public? No, it is only
for those who have signed up (and likely paid a fee). The same goes for a shop.
Or perhaps classes will be offered in the future. Whatever the case, assuming a
future distinction at the beginning will make it a whole lot easier to make
that adjustment if and when that time comes.
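As an illustration, such a structure might begin like the following; apart
from /public, the names are hypothetical placeholders for future growth:

    site/
    ├── public/            ← everything meant for public view
    │   ├── index.html     ← the single page
    │   ├── style.css      ← the single stylesheet
    │   └── media/         ← images and other media files
    └── members/           ← reserved for a future members-only space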
The concept of using directories as purpose built spaces—the same as a
factory floor or warehouse has purpose built areas—leads well into what I
consider to be the overuse of domain names. The word "domain" generally covers a
broad area. Consider that—in biology—all of life is divided into only two or
three "domains" depending on the system used: Archaea, Bacteria, and Eukarya for
one system, or Archaea and Bacteria for the other[1]. Historically, the word has
meant:
"A geographic area owned or controlled by a single person or organization."[2]
Thus, to create multiple domains for one person or organization means that the
word is poorly understood. Behind the confusion, however, remains the lack of
structural integrity applied to directories. If fewer domains and subdomains are
to be used, directories within that domain need to be better utilized. The
directory structure used here is intended to do that; or at least a part of it.
Again, it is unfortunate that the creators of the platforms the author has
tested are not leaders in this area. Discovering this has taken a lot of time.
Efforts in this direction have resulted in a structure that allows
for a hybrid result. Dynamic content can still be used as part of a domain, but
the rationale for the structure used has improved.
Early work with SEO (Search Engine Optimization) revealed that individual
pages needed to be optimized to facilitate that page showing up in search
results for search terms related to content on that page. This resulted in that
page being accessed directly, apart from the rest of the site. It is like
opening a book to a page, without having read the rest of that book. This
pattern of search and retrieval has led to a fragmented experience. While the
viewer can eventually piece together a coherent picture, there is no guarantee
this will happen. In contrast, placing the bulk (or all) of the content of a
site on one page (the first page) ensures that at least all of the content
deemed relevant is in one place, and the reader has it loaded on their device
within a few seconds of visiting the domain or sub-domain.
Second, from the perspective of the site author or writer, it is much easier
to create content that is cohesive using this one-page method. As the headings
for the contents section are created, it can be seen how they fit together.
If it is determined that
they don't, a separate section can be created for them. Then, as the
eye scans down the page, the narrative flow can be followed. Rather than
allowing the reader to parachute in and expect the page to stand on its
own, the context into which that page of information fits is provided. This
removes the weight of having a lot of duplication from one page to the next.
Menus, headers and footers do not have to be recreated for each page, and all
the cached pages do not have to be updated if there is a small change in the
duplicated content.
Finally, it is easier for the site author to provide the
background that is relevant to the content they have created. Who are they? Why
are they writing this? What is their background and experience as related to this
content? Having this information available in the same context as the individual
nuggets of information provided gives the reader an opportunity to know with
whom they are engaging, improving the chance that they will remember the author,
rather than vaguely remember a page they visited in the past, quickly grabbing
the bit they were looking for, and then scooting away.
It may appear odd that designing a model for a self-sufficient community
would end up influencing the view of a web page, but consider that most sites (I
would assume) are built with the entire world as their audience. How much of
that world's population is going to view an individual author's site? In fact,
we know the answer to that. Very few. Using what is called "The Pareto
Effect", deduction shows that views for individuals' sites tail off
sharply, so that most of these types of sites can expect to receive little
traffic. Even if the content is stellar, inertia keeps viewers on sites they
already frequent, especially if these sites are aggregators, incorporating
content from a vast array of people and organizations. That shifts the focus
from strategies that allow the site to bubble to the top to something else. They
are valid strategies, but the objective is impossible. Eight billion people
can't be at the top, when the top contains ten spots.
This leads into a motivation for thinking about how to form resilient, self
sufficient groups; where each member of that particular group receives what they
need in exchange for performing their role within it. This is nothing new. In
fact, the recent acknowledgement that we are on the territories of First Nations
peoples brings back to mind how they lived. They lived in a group where each
member of that group received what they needed in exchange for performing their
role within it. We call these types of groups "tribes". When in a small group,
where any member of that group can walk over to any other member of that group
and speak first hand, in person, there is little to no need for advanced
communication, such as, for example, communicating on smartphones. They are just
over there, so why not go over and speak to them, rather than messaging
them on a telegraph-like app. All of this leads into a rationale for expanding
the thought space that goes into the view developed for a web page. Do I work
for a large organization where most of the people sit behind desktop computers
all day? If I do, is there a pressing need to develop a mobile view for all of
our sites? Developing a mobile view takes time, and that time takes away from
other tasks the web developer or programmer could be doing.
Thus, thinking about the virtual space we use in the same way we think about
the physical space we use can lead to a much better use of that virtual space.
How much of the screen space of a high definition monitor is wasted when viewing
a website designed with a "mobile first" emphasis? Much of it. How much relevant
information can be displayed on a monitor with a resolution of 1920x1080 pixels?
A lot. This means that the web page can be designed in a way that gets the
job done rather than pandering to people who will happily consume content
they had no part in creating, and provide nothing of value in return.
Writing content should create value for the author over time. One way to
monetize written content is to encapsulate it in a book format, and offer that
book for sale, through the available means of distribution: print or online.
However, the experience of online writing has shown that most, if not all,
content written other than directly for a book is difficult to format as a book,
even though it doesn't have to be. The continuous, one page format used here is
shaping up to be more conducive to eventual publication in book format. The
markup is clean and minimalistic, and the creation of a Table of Contents
synchronized with the content, as it is being written, helps to ensure a more
orderly book structure than would otherwise be achieved by writing posts which
aren't sitting side-by-side, or in a linear format. A book published in the EPUB
format is essentially an encapsulated website. That is, HyperText Markup Language
(HTML) and CSS (Cascading Style Sheets) are used to format it. As the markup and
styling are done using the same set of protocols as for the web, it should be
expected that there is a relatively short distance between writing for the web,
and using that content to create a book. Running through this process a few
times is expected to make it easier to figure out what the workflow should be
for the standard website.
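To make the "encapsulated website" point concrete, a typical EPUB file, once
unzipped, contains something like the following; the OEBPS folder name is a
common convention rather than a requirement, and the file names are illustrative:

    book.epub (a zip archive)
    ├── mimetype                  ← contains "application/epub+zip"
    ├── META-INF/
    │   └── container.xml         ← points to the package file below
    └── OEBPS/
        ├── content.opf           ← metadata, manifest and reading order
        ├── nav.xhtml             ← the table of contents
        ├── text.xhtml            ← the content, marked up in HTML
        └── style.css             ← the same styling written for the web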
To meet needs, we need to know what those needs are. For any system
with more than, say, a hundred items, some type of classification or
organizational system is needed. This is where information becomes data. Data is
being defined here as organized, or atomized knowledge. That is, the
context is stripped away, leaving the bare facts. When a well organized dataset
is used properly, it can lead to improved knowledge and better informed
decisions. It could be viewed as a wall of fasteners—nuts, bolts and
washers—where the different sizes and types are all arranged neatly, ready to
be used for any project that needs doing. It takes work to set up, but once
ready, makes subsequent jobs a whole lot easier. In fact, many website platforms
use this approach. The text of a post is stored in a field in a database. When a
page is requested, a call is made to the database, the text is retrieved, and
then assembled as part of a web page. While convenient for websites with more
than a hundred pages, it is an overly complex system for smaller sites. There is
vastly more factory there than there is product coming out of the factory:
easily on a scale of 1,000 to 1.
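For those curious, that round trip looks something like the following sketch in
PHP; the database, table and column names are illustrative, not any particular
platform's schema:

    <?php
    // Connect, fetch the text of one post, and wrap it in markup.
    // Database name, table and columns here are illustrative.
    $db = new PDO('mysql:host=localhost;dbname=site', 'user', 'password');
    $stmt = $db->prepare('SELECT body FROM posts WHERE slug = ?');
    $stmt->execute(['about']);
    $post = $stmt->fetch(PDO::FETCH_ASSOC);
    echo '<main>' . $post['body'] . '</main>';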
To reiterate, this portion of the website inverts the signal to noise ratio,
providing almost pure signal, and eliminating almost all noise. This allows the
focus to be placed on topics more interesting than the mere creation and display
of text: activities that should be nearly trivial now, three to four decades
after the creation of the public web and the specification of the markup used to
display this text (Hypertext Markup Language or HTML). In other words, with a
small amount of knowledge of this markup language (to be included below), much
of the functionality provided by platforms using tens of megabytes of code can
be reduced to a simple text editor. Simplifying the creation and display of text
to a robust and resilient method allows the focus to shift to tasks at which
computers are better than humans. This includes keeping track of stores
of products, including food, who needs those products, and how to get them
there.
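As a first instalment of that markup knowledge, here is a complete single-file
page; it is close to all that the method described here requires:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>One Page</title>
      </head>
      <body>
        <h1>One Page</h1>
        <p>The text goes here, readable in any browser, and in any
           text editor, as plain UTF-8.</p>
      </body>
    </html>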
The multi-person system (MPS) schematic described above has been developed on
this planet (Earth). However, from the perspective of an outside observer,
we are there, whether or not the outside observer exists. In
other words, the familiarity we have with our surroundings--and the surface of
this planet--is solely our familiarity. If we were to go somewhere else, we
would become acquainted with that place in due time and it too, would become
familiar. It is all a matter of perspective. Likewise, if we are ever to go
somewhere else (that is, off planet), what we have with us when we
leave, will be with us when we arrive. This means, with a few moments of careful
thought, that there is nothing that prevents any one of us--or a group of us,
for that matter--from doing what it takes to act as if one day, we will
be there, rather than here. It is rather like going on
vacation. Whatever you pack when you leave is there in your suitcase when you
arrive. It's what you've got, and it isn't easy to change it for something else; at
least not all at once.
It is with that in mind (along with the need to live sustainably on this
planet, of course) that I have been proceeding over the past five and a half
years. In software, one method is to release early and release often. This
results in a lot of updates, and some of these updates may be upsetting for the
end users, especially if there is a radical change. That radical change may be
necessary, so I have felt it may be better to release a more mature version
late, in the hope that radical changes will be greatly reduced, if not
eliminated. Examples of changes include: names, numbers, placement, storage
protocols and text and data presentation. It may even include an operating
system. Getting this all "right" (or as close to "right" as physically possible)
is better than running out of the gate with pieces falling off here and there,
because the focus on one aspect results in the neglect of another part of the
project.
With the above said, the following short description has been refined to: "A
model for a self-sufficient multi-person system has been developed that places
complementary trades and professions on a 10,000 m2 (100 ha) campus-like
setting. Individual properties each have space to perform supporting work,
find solutions to some of the problems facing humanity today, refine existing
models and innovate. Intended is a transferable synergy that sustains the people
living there and overflows to the surrounding communities." Two main shifts have
occurred since the original schematic was created about five and a half years
ago now. The first is that one workshop zone per cluster has been changed to a
cafe-like meeting area, so that the professionals, trades people and others from
each cluster have a place, within easy walking distance of their own workshops,
to meet, take a break and discuss their work during normal working hours. This
area is distinct from the commons area for each cluster and from the central
commons area (defined elsewhere). The second is to shift the focus from the
thoughts arising from the use of the word "community" (or even, "colony") to the
more abstract and utilitarian term: "multi-person system". This is so that the
ideas and methods developed are more portable. As a result, the model for this
system is being moved from: ec01.earth3300.ca to: mps.earth3300.ca.
Off-Planet Potential Provides Motivation for Refined Systems
When a system is operating and in use, there is no time to refine it, or make
structural
changes. That needs to happen during the "down time". This "down time" could be
overnight, on a weekend, or it could be during the off season. In many cases
this is during the winter. Because of the ubiquity of the internet, software
developers assume users almost always have a functioning internet connection,
and so they push updates on a frequent basis. It is rare for a rock solid,
stable set of software to be available, that is expected to run without
maintenance for a substantial period of time. There is one exception I am aware
of and that is the Debian operating system, which will be dealt with in the
section below. Writing code for use on servers--which must have a high
uptime--results in code that is robust and stable. This demonstrates that code
can be written which requires few updates, because of the way it is
written.
The previous section (specifying an off planet potential) provides a motive.
There is no motive if the code can be updated from here, because it all exists
at the end of a high speed, reliable internet connection. If an update breaks
the system (because an error was introduced), another update can fix the
introduced error the next day. Since much of this is invisible to the user, the
programmers, and the systems in which they work, are not pushed to being ultra
careful in how changes are made. This changes when the code at the receiving end
is disconnected from the sending end. At best, there may be a lower bandwidth
connection. Regardless, remote debugging is difficult. It is better if it
doesn't have to happen.
An example of a simplified system is this page. At first, it may appear
clunky. There is only a single vertical column. There are no pages (at least at
the time of writing) to click to. Once loaded, everything is there. Even the
underlying text is (supposed to be) formatted in much the same manner as the
styled text, so that it can still be read. Try reading the source code of many
pages. In many cases, it is difficult, if not impossible. Yet, the format for
this page still allows it to be more elegantly styled. That is the beauty of
CSS. It was designed so that the same underlying markup can be made to look
remarkably different by making only stylistic changes.
Experience shows that a refined, stable system--operated by a trained,
experienced individual--performs, for the most part, regardless of conditions.
This is the way it should be. Diesel tractors are designed to be reliable and
operate for hour after hour with little maintenance. Most of the tractors we had
on the farm did that (with one exception). After having experience, however,
with a tractor of lesser quality, we had learned our lesson, and didn't go back
to that brand. That is the difference with quality tools and equipment. When they
are well designed, they are a joy to use, and may very well last ten, twenty or even
thirty years before needing replacement. This dampens the effect of the initial
purchase cost. The tool isn't acquired to make the owner feel good about owning
it, it is purchased to accomplish a job, and that job is part of a larger
system. In this context, having a professional like an engineer or architect as
part of the system raises the bar. It moves the potential of the whole system
from a potential of "Average" to a potential of "Very Good" or "Excellent".
The Development of an Operating System Provides a Tested Example for the
Development of Complex Systems
The Debian operating system is described by the following:
Debian is one of the oldest operating systems based on the Linux kernel. The
project is coordinated over the Internet by a team of volunteers guided by the
Debian Project Leader and three foundational documents: the Debian Social
Contract, the Debian Constitution, and the Debian Free Software Guidelines. New
distributions are updated continually, and the next candidate is released after
a time-based freeze.[1]
This is an excellent segue between the experience the typical user has with
software and what actually goes on to produce it, and a template that could be used
in the design of a multi-person system, which is essentially a human version of
an operating system. In other words, an operating system has a suite of
functions to perform, and this diversity of functions is similar to the
diversity of functions required for a self-sufficient community (a multi-person
system).
To interpret the preceding, notice that the quote includes three documents: a
contract, a constitution and guidelines. The constitution would provide general
guidance, the guidelines more specific guidance, and the contract would be a
document to which those working on this OS would agree and bind themselves. This
is essential. A constitution by itself is not enough. Guidelines are not enough.
A contract is required to ensure that those who are part of the network that
develop this operating system agree to work within the framework established by
the principles and the rules. This defines those who are in, as well
as those who are out. Not everyone can do it, and it's got to be done
right.
The term "hyper-specialization", as being used here, is defined as an
individual (or group of individuals) that are so specialized in a particular
task, that they would not be able to fend for themselves, if they had to. That is,
they rely exclusively on others for food, drink, shelter, transportation and
clothing, and would be lost if they ever had to obtain any of that on their own.
Unfortunately, it appears as if our culture has been heading that way for the
past generation. Hands on skills have been on the decline.
While virtual and keyboarding skills had been increasing up to about ten
years ago, it appears that even using a full sized desktop or laptop computer is
on the decline. This is difficult to tell: people walking around with
smartphones may still use laptops or desktops at work or at home, yet I don't
see many even carrying laptops around, or using them at a coffeeshop
anymore. It really does appear as if many rely exclusively on their smartphone
to interact with the virtual world brought digitally to them via cell phone
towers or their home wifi. What would happen if a person walking out of doors
with their smartphone, wearing a pair of sandals, shorts and a t-shirt, had
something happen to them? They could call someone, sure. But that is about it.
And if the battery dies, away goes all of the functionality that this
device had been providing.
While it is easy to point the finger at someone else and say "It's their
fault", an adult is responsible for themselves. Heavy use of existing
technology--to the exclusion of other means of meeting needs--implies assent.
The same technology that is so heavily relied upon, can bring in systems that
are more robust. That is, when taking the "eggs out of the basket" of the
ubiquitous smart device, what shows up are: a camera, an MP3 player,
geolocation, texting, web browsing and video viewer, to name a few. Handy yes.
But all in one device? Not wise.
When the alternative is hard, manual labour, any labour reducing technology
that arrives is welcome. The danger comes when the pendulum swings the other
way. Too much time swinging bales turns into too much time behind the keyboard.
This is when it is advisable for humans to use their advanced cognitive
abilities to find the balance between the two. Not only are we physiologically
designed to interact with the physical environment, we benefit and even learn
from it. Consider how much of the brain maps to the fingers, the thumb and
forefinger especially. Take away the nuanced and intricate feedback the physical
environment provides, and this part of the brain atrophies. At the same time,
the visual cortex is over stimulated, and the frontal cortex (assumed to process
abstract thought) must take over as navigator, replacing the entire body with
all of its senses as a feedback mechanism. Some are better at this than others,
but should those who aren't, and don't understand how advanced technology works,
rely on those who do (or who at least understand part of it)?
This is the danger that emerges on the other side of the technological
revolution. To be fair to ourselves, we haven't had time to adjust. A generation
is not enough to adapt to a landscape that has changed so quickly. This is why
it is wise to show restraint before adopting such a radical transition. This is
also one more reason why I feel it is wise to develop a community model that
includes low tech fallbacks that remain an integral part of daily life. For
those who are still enamoured with advanced technology to the point of not being
able to consider anything less, consider that our own bodies already display
signs of extremely advanced technology, albeit biological technology. The way we
process the water we drink and food we eat, for example. Even if it is granted
that this is the result of evolutionary forces, the result is remarkable, to say
the least. How much are our minds and bodies capable of, when trained properly,
with accurate knowledge? Quite a bit, I would imagine. Thus, looking for ways to
minimize dependence on tech, while acknowledging and understanding it--to
me--appears a reasonable way to proceed.
As the individual progresses through life, the activities that were
appropriate at an earlier age, transition into something else. The grandfather
that was there to provide stability and a sense of assurance is gone, and the
boy that had that grandfather may some day expect to be one himself. At the same
time, the atomic activities that younger individuals are better at (doing
without having to know "why") turn into activities that must have more meaning
in order for those activities to be engaging. The programming that I did ten
years ago remains, and I still find writing code invigorating, but it may be
better to turn my attention to how to advise the next generation, and put bevels
on the facets of the diamond we are collectively creating. To put this another
way, what is behind the screen is a blank slate to those who don't know how it
works. This, despite the fact that the source code is available for the looking.
But not knowing how it works doesn't make it go away. It's still there, and the
fact that it is not noticeable is a sign that it is working the way it should.
Thus, it could be suggested that a strategy for using tech is to understand how
it works before adopting it as a part of every day life.
But, who is going to guide this process, to ensure it happens? The human is
wired to do the least work possible. If technology is given to me that allows me
to go weeks on end without disruption, a habit of reliance will be developed,
and there will be little incentive to figure out what is going on behind the
screen. The same goes for the food we eat. A delicious breakfast sandwich
appears before our eyes. All we have to do is say the names of the ingredients,
indicate whether we would like it toasted or not. Mayo and salt and pepper? Yes,
please. Wave a card over a terminal or hand over some cash, and we are happily
munching on something that would otherwise be utterly impossible if we had to do
it all ourselves.
Another reason for the heading of this section is that people love to give
advice. The instant information fix the internet provides turns the otherwise
clueless person into a savant. Have a problem? We have a video for that. "Why
don't you do this?" Etc. The only person who knows a person is themselves, and I
would hazard to guess that many people don't even know themselves that well,
despite the millennia-old admonition, "Know thyself". When I look at another
person, all I see is what is on the outside. What I see may provide a few clues,
but beyond that is a vast, hidden history. It takes years to get to know a
person well, and after that time, one or the other may move on. It is with these
nuances in mind that this section is being written.
In my experience, the development stages of the early years are better known
through the work of psychologists like Piaget, than those of the later years. At
least, I am not aware of any clear cut distinctions that say, "By this age, a
person ought to be doing this." It is possible for development to be
arrested for various reasons. It is also possible that a young person leaps
ahead and does tasks beyond their years. For myself, I had decided when
embarking on this project that I had the skills, background and experience to do
the work I have done. Since no one I know has looked closely at this project so
far, no one I know has the credibility to comment with any degree of knowledge
or wisdom. All that to say, I have done it because I could and because it needed
doing.
Finally, I have to reject the advice that comes along from time to time where
the person looks in and says "Oh, you need money. Why don't you work for a
while, get on your feet, and try again." The answer to that is that I am here
precisely because I was working and have rejected the system. I had a
job, a house, a car, etc., but couldn't see myself working in a system
(industrial agriculture) that sucked the life out of the individual. The problem
wasn't with the employer, it was with the whole dang system. Since that time,
stories others have told have confirmed that conclusion. In fact, the events of
the past few years indicate this is turning around. Part of the reason I have
developed this "one page" format is to make it easy for others to "get their
story out" and do so in a way that is relatively simple (technologically
speaking) and allows them to present clean, crisp text in a linear fashion that
can be turned into a book (essentially by copying and pasting the text). Assent
implies consent, and it is the mass of individuals who will end up
making the difference, as it is with the thousand ants that make up an anthill.
This section will deal with the selection of topics, software, OS and fonts
as well as keyboard shortcuts. As with the development of an integrated platform
bundle (using base WordPress), it has been discovered that the selection of
topics, software to interact with that topic, an OS on which to place that
software, fonts to display information, and keyboard shortcuts are interwoven. Thus,
beginning with any one of these, working through how they are inter-related and
making adjustments will result in a better system, without question. That better
system will translate into more precise keyboarding, better productivity and
finally, a better final product. For example, many software packages have
shortcut keyboard combinations assigned to improve workflow. As there is no
broad reaching inter-software standard for these packages, these keyboard
shortcuts end up being different from package to package. In some cases they
collide. When this happens, one or both keyboard combinations need to be
adjusted. This task alone is substantial. Therefore it makes sense to develop it
to a reasonable state of order, and then replicate it. The Ubuntu version of Linux
(which I have been using) does not allow for this natively. However, it appears
that OpenSUSE does. In addition, OpenSUSE appears to be better suited for
development, programming, and tasks related to science and technology, whereas
Ubuntu is better suited to tasks associated with standard desktop use. This is a
non-trivial decision, as once the operating system is chosen it is not easy to
switch away from it.
I made the mistake of being drawn into the presentation of data,
images and text when first taking on programming full-time, rather than
focussing on the topic I began with, which was ecological issues. I had been
interested in why it was we "couldn't make a go of it" and appeared to be using
up the planet in an unsustainable fashion, rather than using our resources
wisely so that everyone had enough. However, that "being drawn into" was
unavoidable, as it is a pre-requisite for being able to deal intelligently with
the reams of information and data available on every imaginable topic. If I
could draw all of that previous work into one subject heading it would be the
word: View. That is, data needs to be presented in a way that it can be
viewed properly, and to do that requires a knowledge of the
underlying mechanics. If that takes a dozen years to learn, it takes a dozen
years to learn.
Having said that, and starting from the bottom of the physical scale, a
selection of topics for which information can be gathered and analyzed are:
Genetics, Species, Geology, Climate and Exoplanets. I omit "Stars" as a
category, as (to me) a star is "only" a means by which a planet exists. Past the
light and energy it provides, there is no functional utility available to the
earth-bound human. The light and energy from a star needs to strike a planet,
warm it, and from there increase the potential for life. Information gathering
includes more than being aware of datasets on these topics, it is
becoming familiar with their content, what it takes to filter that data to a
relevant subset, format that subset, then present it in a written and visual
format, understandable to the viewer. This takes more than saying the words,
"Computer...", it takes experience and skill. Filtering and formatting is what I
did when working for Statistics Canada, using the Table Programming Language
(TPL) to do this. The tables created were used for data verification and then
publication.
The process of data storage, retrieval, filtering, formatting, analyzing and
presenting follows a relatively predictable path. Although much data is
available online, and some of that is proprietary, some is open source
and so can be downloaded freely and then stored for local, offline use. This is
where the other aspects of the programmer's "toolkit" come into play, including
the workspace module. Offline storage requires secure, powered equipment in a
known, stable location. It can't be expected that top-end, professional grade
work is done on a laptop in a coffeeshop. Fortunately, today, terabytes of
storage is available for a reasonable cost. Once the path is set, it becomes
easier to travel. It can also be seen that a careful, wise selection of topics,
coupled with the right type of analysis and presentation has a high value.
Consider that desktop grade computing power can begin to tackle planet level
modelling, in a space of less than 103 feet squared, and the
potential begins to be visualized. In fact, it may be detrimental to have too
much space in which to work, as a compact, well organized layout has
benefits in that everything which is needed is near the worker.
Selecting which software to use on a professional desktop or laptop computer
is a skill learned over time. And--over time--that software changes, so that a
choice made early on may change over the course of 10 years. Once that software
has been selected it needs to be installed. Once installed, it needs to be
configured. If there are a dozen applications installed, each with thirty
configuration options, this adds up to 360 configuration options. It was noticed
that a significant part of a WordPress setup dealt with configuration, which
included the selection and installation of plugins. In addition, it was also
noticed that all the configuration options were stored in the database,
or in the configuration file (wp-config.php). Finally, it was
discovered that the process of installing a new WordPress site, could include
writing a pre-determined configuration to the database and to the configuration
file. This then shifted the setup from manual configuration to automated
configuration. These files are available for inspection[1].
The same process holds true for the selection, installation and configuration
of software on the desktop or laptop computer. An advantage the open source
based Linux Operating System (Linux OS) has over computers based on proprietary
software, is that software can be downloaded and installed from universal
repositories. Once downloaded, they can be configured and used. The majority of
applications needed--if not all--are available free of charge. This makes
setting up a computer for personal or professional use much easier. However,
although the software is available freely, the knowledge required to
operate that software isn't. This nuance (as critical as it is) is
easy to overlook when first encountering open source software. It is downloaded
and installed, and then the rocky crags appear. In depth knowledge of an entire
field is required to make it do what it can do. Few people have the time or the
skill to learn these applications to the depth they could be learned. This
includes processing images, audio and video, not to mention databases and the
creation of the queries needed to extract data from them. Once again, work in
the context of a carefully set up contractual network can take these bottlenecks
and move past them.
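As a simple illustration of the repository model, a working set of the applications mentioned can be pulled down with a single command per machine; the package names shown are those found in the main repositories at the time of writing.
sudo apt install gimp audacity kdenlive
# Ubuntu and other Debian based systems
sudo zypper install gimp audacity kdenlive
# OpenSUSE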
With emphasis placed on the creation and maintenance of content online, the
selection of the Operating System that runs the device being used to
create this online content is de-emphasized. The exception, however, comes
from the disciplines which require processor- and memory-heavy
applications. These include engineering, architecture and landscape design as
well as image, audio and video editing. It is for these uses that a careful
selection of Operating System is made. macOS is a strong choice for graphics heavy
applications, and has been for some time. However, the starting point here is a
Linux based environment, first, because the author already possesses a fairly
deep knowledge of this system, and second, because there are no high up-front
costs associated with purchasing and setting up multiple machines. Older desktop
or laptops can be taken and given new life with a fresh install of the chosen
Linux OS.
The second part of this discussion centers around how easily a given Linux OS
can be replicated. It has been found that OpenSUSE has built-in capabilities to
replicate itself, although this hasn't been tested. At the time of writing,
OpenSUSE has been installed on only one machine by the author. It is an older
machine, and slower; thus it isn't a decent starting point to put this OS
through its paces. The author has used Ubuntu for some time now. It
does everything that needs to be done, and there are few to no issues finding
the software needed for the tasks surrounding programming (including
editing images, audio and video). The one exception was the program Celestia.
This takes the abilities that Stellarium has (which mimics the
capabilities of a planetarium) to view stars and planets from the perspective of
earth and moves up one order of magnitude to allow the viewer to virtually
travel to those stars and planets to examine them from up close (to the
extent that imagery of sufficient resolution is available, of course). When
investigated a number of years ago now, it wasn't easy to install Celestia on
Ubuntu. However, Celestia has been installed and tested on OpenSUSE. It
ran, but due to the low powered computer it was running on, performance wasn't
optimal.
It is because of critical decisions like these that working in a contractual
and technically supported environment has the potential to return more than
invested. Obviously, navigating the known universe in a virtual sense is a
precursor to navigating it physically. If all that it takes is a newer computer
with reasonable processing power and memory to make that happen, that is more
than ample reason to take that route. Further, as it has already been stated
that geology, climate and exoplanets are valid topics for examination, porting
our knowledge of earth-based geology and climate to other planets is a small
step. The incredible earth-based diversity of species and plant-life may some
day be the seeds for life on currently barren but otherwise life-supporting
planets. It doesn't need to be said what value that potential may hold for our
future.
Keyboard shortcuts are keys that--when pressed together--perform a function
that would otherwise take a sequence of mouse moves and clicks; thus the name.
There are several layers to this feature. The first is discovering what they are
for any particular program. For example, Ctrl-C is commonly used to copy text
and Ctrl-V to paste text, and Ctrl-X to cut text, tasks used in Word Processing
programs going back to their initial days. Image editing may have a set of
shortcuts common--more or less--to most image editing programs. The same goes
for video editing, and then audio editing. When the application is in focus and
the keyboard shortcut is pressed, the combination for that particular program is
used. In addition, the desktop has its own set of shortcuts. These can be used
to switch between windows, maximize or minimize windows, or tile them on one or
the other side of the monitor. Within the context in which they are being used,
these shortcuts are convenient. Historically, they were used even more:
keyboards designed for technical users in the 1970s had many shortcuts
available, as this was in the days before the mouse or touch screen.
Knowing this much, the unwary user may take the time to learn shortcuts for
their favourite programs, and the desktop environment on the Operating System
(OS) they are using. A few years may go by, and then they may notice, "Hey, that
keyboard shortcut conflicts with this keyboard shortcut!" (This happens when an
application shortcut conflicts with a desktop shortcut). A second type of error
occurs when different tasks are performed by the same shortcut in different
programs. The shortcut for the one program is used, but the other program is in
focus, resulting in unexpected behaviour. Finally, a similar program from
another vendor may not have a shortcut assigned to a function. The shortcut is
used, but hasn't been defined, so nothing happens. All of these types of errors
are an inconvenience at first, but can become aggravating, especially if the job
has got to get done, but the wrong things happen. For those familiar
with designing graphical user interfaces (GUI), it will be recognized that these
shortcomings have room for improvement.
The author has already put a number of hours into recording the shortcuts for
common programs used in the programming, writing, audio editing, image editing
and video editing workflow. The image shown above is the base keyboard layout for
the Periboard 409. The full set of key combinations is a 3x3 table providing
nine variations from that base set. Once the default keyboard combinations are
known and recorded side-by-side, it can be determined which ones conflict and
thus are candidates for being re-assigned. If it were only for the reasons noted
in this section, this set of random, conflicting or unexpected behaviours ought
to provide sufficient motivation to create and replicate a standard computer
setup. This setup would include the Operating System (such as Ubuntu[1] or
OpenSUSE[2]), the desktop (such as GNOME), the programs (such as LibreOffice,
Atom, Audacity, Gimp and Kdenlive) as well as the specific keyboard (such as the
Periboard 409-U) and model being used. Finally, it is better if the shortcuts
chosen are easier to press rather than more difficult. It is easier to press
Ctrl-Alt together, than Ctrl-Shift due to the relative placement of the keys.
This exercise appears to be trivial at the outset, but isn't trivial over time.
If the same task can be performed with 100% accuracy in 15 minutes, versus at
75% accuracy over 25 minutes, obviously the first is preferable. Refining the
set of shortcuts used across software applications, but for a particular OS,
desktop and keyboard is expected to work better in a technically supportive
environment where the users are informed of the benefits of this refinement and
have an active role in the process.
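As one sketch of how the desktop layer of shortcuts can be audited and re-assigned, GNOME (the desktop named above) stores its window manager bindings in gsettings. The re-assignment shown is illustrative only, not a recommendation.
gsettings list-recursively org.gnome.desktop.wm.keybindings
# Lists every window manager shortcut currently assigned
gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
# Re-assigns the window switching shortcut to Alt-Tab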
It is often the details that derail a project. Consider how easy it is to
write text, and how difficult it is to format that text properly. This is why
many prefer to use pre-existing solutions for blog posts, articles and the like.
It is all fun and games until a finer detail crops up. I would like to add an
anchor link that looks like a broken infinity loop or other icon that is not on
the standard character keyboard or font set. What to do now? I would like it to
look professional, at the same time, I want to make sure there is a graceful
fallback so that if these additions are ever stripped away, something is there
that still works. A strong option is to add what is called a "sprite": a
transparent image with the icons used on a site placed in a grid pattern on it.
The one desired for a particular location is specified by styling. It works
well, is fairly robust and should work most if not all of the time, as long as
the styling and the sprite PNG image are present. But what if they are not? If I
am beginning a project and don't have these add-ons present, I want to keep the
flow of writing going; not get dragged into finding the right icon, opening up a
graphics editor and writing CSS for that image. I simply want to keep typing.
In this case, it may be better to begin with a text based solution,
using characters that are on a standard keyboard, and then replacing that
descriptive text with the relevant icon at a later date. In this way, the link
will always be visible, and can be typed by anyone with the same, standard
keyboard. This is the approach being used here.
Fonts, icons and glyphs are variations on the same theme. The single
character in a font set is not seen as an icon or glyph by the average user due
to habituation. The characters in the English alphabet have become ubiquitous,
with their etymology left behind. Further, the primary language used in
technical writings--especially programming--is English. I have not yet seen code
written in French, German or Spanish, though it certainly could be. In its place
are translations, but the translations are an overlay on top of the base
English language. In particular, the base language of the programming
languages used for web development is English. This skews the thinking and
discussion to an English based framework and paradigm. Sometimes other letters
(such as Greek letters) or words from other languages frame the thought better.
But English is, in general, the base language used in the technical world, in
the author's experience.
Having said that, the letters of the Latin
alphabet comprise only a subset of the available characters. A much larger set
is defined, and these are captured by the Unicode Consortium. What Unicode has
done is to create a correspondence between the glyph--whether the Latin
character "a" or the Greek character "alpha"--and what is called a "unicode
point"--a unique, or almost unique hexadecimal notation that can be used to
refer to that character in a precise manner. It does not prescriptively define
what that character should look like in a particular font, but it describes that
character as distinct from others which may be close to it in appearance. All of
this is necessary because characters stem from language groupings and language
groupings have distinct and historical backgrounds. The result of all of
this--for modern web usage--is that it is possible to create a single custom
font-set, which includes the expected Latin character set; but also common
glyphs which can then be styled to match the font used with that standard
character set. This means that the telephone and mail glyphs, or common menu
icons can be used in a manner identical to characters available on the keyboard.
The underlying representation--a defined Unicode code point expressed in
hexadecimal--works the same way for each.
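To make the correspondence concrete: on a reasonably recent Linux shell, a glyph can be produced directly from its code point, showing that the envelope and telephone symbols are addressed in exactly the same way as the letter "a".
printf '\u0061 \u2709 \u260E\n'
# Prints the letter "a", an envelope and a telephone glyph from their
# code points (U+0061, U+2709 and U+260E)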
Character encoding is a low-level process. Much of the time, it isn't an
issue, but if the characters being displayed are off the beaten path, it could
cause issues. The standard the author follows is to declare that HTML pages are
encoded as UTF-8. When creating new databases or tables in MySQL, the collation
is set to utf8mb4_unicode_ci. The mb4 stands for
Multi-byte 4, which allows up to four bytes per character instead of the at
most three bytes of MySQL's older utf8 encoding.
The ci suffix denotes that the collation is case-insensitive.
As this may not be the default when creating a new table or
database, this is useful information. The user or database administrator may be
inconvenienced if other settings are used, particularly if a lower multi-byte
setting is used and this then needs to be changed in the database. Although
there may be no untoward effects, it is preferable not to have to make low-level
database changes. It is because of rarely encountered details like this that
being part of a system that includes a technologically inclined person begins to
provide a return for the investment of having to work with other people in a
systematic manner.
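A minimal sketch of setting this at creation time, so that it never needs to be changed later, follows; the database name is a placeholder.
mysql -u admin -p -e "CREATE DATABASE site_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
# New tables in site_db now default to the four byte, case-insensitive collation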
Plain text is the simplest possible format that can be displayed in a web
browser. It is easy to generate and easy to display. Despite this fact, it is
rarely used. Plain text can be created by typing into a text editor. However, to
create readable paragraphs, the lines have to be formatted to a certain length.
The default line length is usually 80 characters. If new lines are not added
every 80 characters, the line will scroll off to the right on the page, making it
unreadable. In addition, if lines of the right length are created as the text is
typed by pressing the Enter key, a later edit on that paragraph
often results in a jagged set of lines; one line will be very short. This can be
fixed manually, but is easier if it is done programmatically. In the Atom editor
being used to create this site, a package called "Line-Breaker" has been
installed to perform this function. A paragraph with lines of the correct length
is created by pressing CTRL-ALT-ENTER, after selecting the text to be formatted.
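For those not using Atom, the same reflow is available as a low tech fallback in the form of a standard command line utility; fmt ships with the GNU coreutils found on typical Linux systems.
fmt -w 80 draft.txt > draft-wrapped.txt
# Reflows each paragraph of draft.txt to lines of at most 80 characters,
# leaving the blank lines between paragraphs intact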
For the web author used to writing in an online editor, the above paragraph
may appear overly precise and low level. Why would this be needed? The answer to
that has to do with creating what I am calling a "Low Tech Fallback" for each
piece of the puzzle, as the various processes of a self-sufficient community are
built up. Text is required to communicate--of course--and the simplest possible
means of formatting that text is sought. This is what the result of that
objective is. Consider the task of the typist of the 20th century. A paragraph
is typed on paper and the paper is removed from the typewriter. It is edited. A
single word or a few words are changed. This required the entire page to be
re-typed. What a relief, then, for the computer to come along and prevent the
need to re-type an entire page, just to fix up a few words! Yet this simple
advance has created a whole host of complex problems. The typical website using
an online platform is easily more complex than a simple text editor by a factor
of 1,000. To understand this complex machinery, skilled technicians are
required. They, of course, like to earn well for their expertise. The typical
small business owner or web author can't afford to pay them what they are worth,
so makes do or struggles along, spending countless hours trying to figure out
how to format text for the web. Which system is now simpler? The one where
text is typed into a text editor and a single formatting package formats the
line lengths.
Now, who is going to set up the text editor to do this? The Atom text editor
is not that difficult to install. It is like many other text and code based
editors. However, it has way more functionality than the standard web author
needs. This is where the replication of laptops containing a standard set of
software developed for an integrated community comes into play. Rather than
trying to "take on the world" with a nifty idea (with slim chances of
succeeding), this method peels off a much smaller chunk. Recognize that each
individual has roughly similar needs and create a set of systems that meets
those needs, while the independence of the individual is retained. Taking this
down to the smallest level, the result is a laptop with the "right" software on
it that is made available to those contracting to function as part of this
integrated network. If these people live within a reasonable distance of each
other, providing service and training for products distributed with this method
becomes a lot easier.
Beginning on the wrong foot could mean a perpetual imbalance. Imagine you are
one step away from discovering where the food comes from in the grocery store.
When going there, you see the food, but don't see where it comes from. Then one
day, someone shows you that a seed is put in the ground. From the ground the
plant grows, and from the plant the food is harvested. It is like that with the
web. We use it every day, yet are only a step or two from how it is created. In
fact, as I am writing this, I am making a similar discovery. The base language
used in PHP (one of the common languages used to drive the web) is only a step
away, but has remained invisible to me, because it hasn't been explicitly
declared as such in the PHP language. A similar example is being able to
understand and describe how sounds are formed as we speak. It is done without
thinking, yet describing it accurately is a task only a few can do. I will
describe a low level (but essential) computer process here as a reminder to
myself and to whet the appetite of anyone wanting to learn and work at this
level. It is a very handy skill to have.
For example, suppose that you have a PDF document and wish to create a video
of it, so that it can be read without intervention or, at most, read by hitting
the space bar to continue or pause. To do this, the document needs to be
transformed into a set of images, and then those images may be used in the
video, simply by placing them one after the other, with enough time on each
image to read through the page. You can search the internet for a program that
will save the PDF document as a set of images (and may very well find one), but
this functionality may already reside on the computer you are using; if you are
using a Linux based OS (such as Ubuntu or OpenSUSE). To do that some knowledge
of the underlying processes is helpful. In specific, with the utility
pdftocairo installed, the following is expected to do what is
required.
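A sketch of such a script follows; the crop height of 1500 pixels is an illustrative value only, to be adjusted to the pages being processed.
#!/bin/bash
mkdir -p images
# Ensure the output directory exists
for f in *.pdf; do
  pdftocairo -png -H 1500 "$f" "images/${f%.pdf}"
done
# -png writes one numbered image per page; -H crops the page height
# in pixels, trimming the extra bottom margin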
This script takes the files in the current directory and selects those which
have the *.pdf extension. It uses pdftocairo to generate a set of
images, with the specified options. The goal is not to explain the specifics of
this application, it is to show that there are low-level utilities already
built-in to Linux based Operating Systems. This example script is set to process
PDFs in the folder in which it is placed and create one image for each page at
the default width, but cropping the extra bottom margin. The images are placed in
the images directory. This script must be used in the Terminal with
the appropriate permissions set. There are many tutorials and reference
documents available online to provide further details on the Linux Operating
System and low level functions like the one shown here.
Working through functionality like this is technical work. It requires time
or it requires having someone nearby who is knowledgeable in this area and could
help with this task. This is the benefit that arises by being part of what I am
calling "a contractual network", where those with specific and complementary
skills in select trades and professions work together to achieve better results
than they could alone. This "contractual network" could occur in existing
cities, towns and communities, but is expected to be more easily demonstrated in
a community model that is built from the ground up for this purpose. The best
way to explain something is to show it. Thus, having a functioning
self-sufficient community built from the ground-up and incorporating a
contractual network is expected to be the most efficient way to demonstrate
it.
If you understand the following, you don't need a tech person in the loop,
can set up your laptop as a web server from scratch, and don't need to spend
hundreds, if not thousands on other people or for a web hosting service to do
this for you:
sudo adduser r4 www-data
# Adds user "r4" to the www-data group
sudo chown r4:www-data -R /var/www/your_site
# Sets the owner to "r4" and the group to www-data, recursively
sudo find /var/www/your_site -type d -exec chmod g+w {} \;
# Gives the group write permission on each directory
sudo find /var/www/your_site -type d -exec chmod g+s {} \;
# Sets the setgid bit so new files inherit the directory's group
To be honest, I am not sure how many people posing as web developers
understand the above, or if they would remember it and could type it in directly
from scratch. It isn't just a raw understanding of the commands that are
important here, it is also what happens when these commands are entered into the
terminal. Getting it wrong could change permissions for all the sites
running on the device, or change permissions in the wrong directory.
Patience and attention to detail are required here. These are part of one of the
Big Five underlying personality traits identified in the last thirty years:
Conscientiousness. Certain people have this trait and so would be able to work
through details like this. Others don't and are better suited to other tasks.
The trait is not dependent on formal studies, but may influence those studies so
that those with it end up in professions like Physics, Mathematics or Computer
Science.
The reason I found the above snippet was not due to brilliance, it was due to
being organized and realizing that the directory structure could be used to
better effect. Thus, your_site contains the /site
directory, and that directory contains the /public directory. This
prevents everything above the /site directory from being accessible
online. The /public allows room for an /admin or
/members directory beside it, while explicitly declaring (to both
the site author and viewer) that the material in it is definitely intended to be
publicly accessible. Cranky operating systems that don't let you access the
files you want when you want to access them, also don't allow others to do the
same. This technical obstinacy leads to a more secure computing environment. It
needs to be remembered that any file that you the user can access from
anywhere (after providing a mere username and password) can also be accessed by
anyone else with roughly the same credentials. A more robust interface
will detect if a different computer is being used and offer a challenge
question, but the main site platforms of which the author is aware do not have
this more advanced capability.
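Sketched as a tree, with the names used in this discussion, the layout looks like the following; the web server's document root is assumed to point at /public.
/var/www/your_site/
    site/
        public/    # served online; the document root
        admin/     # room for administrative pages, beside /public
        members/   # room for member-only pages, beside /public
# Everything above the /site directory (configuration, backups, notes)
# stays out of reach of the web server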
A distinction the Linux OS offers, over the alternative, is fine grained
control over file permission and ownership. Beyond this, a lesser known
capability allows a file to be immune from being changed, even if an attempt is
made by the file owner with the appropriate permissions. This is called
immutability. When a file or set of files is given this attribute, it
can't be deleted or modified, even by the file owner. The immutability attribute
first needs to be turned "off" before the file can be changed. Moving back to
ownership and permissions, it has been discovered that even widely used
recommended settings can be refined and improved upon. The main requirement for
a file being read and displayed online is that it is "readable" by the server.
As it is the server doing the reading, these files can be set to 400, which
allows them to be read only by the owning user. Write permissions are needed
only if the file needs to be
modified. In most cases, files do not need to be modified unless there is an
update to the platform. This alone implies that an additional layer of security
can be applied to a website by merely changing permissions for all files to
read only (with no writes) for the times in between site updates.
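On the ext family of filesystems used by most Linux installations, that combination looks like the following sketch; chattr and lsattr are the standard utilities for the immutability attribute, and the file name is a placeholder.
sudo chmod 400 index.html
# Owner may read; no one may write, between site updates
sudo chattr +i index.html
# Sets the immutable attribute; now even the owner cannot modify or delete it
lsattr index.html
# Verifies the attribute (an "i" appears in the flags)
sudo chattr -i index.html
# Removes the attribute when the next update is due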
Arcane technical details require precise implementation. When a configuration
file is stored digitally, for example, a single character for a value often
makes the difference, denoting whether that value is ON (1) or OFF
(0). By the same token, punctuation marks such as colons or
semi-colons often denote the end of a "key" and the beginning of a "value" (with
a colon) or the end of a line (with a semi-colon). It is patterns like these
that the experienced coder is familiar with. This experience is built up over
years of use, so that looking for details like this in non-functioning code
becomes a matter of habit. Having said that, however, it is not possible for the
human mind to retain detailed knowledge of thousands and thousands of lines of
code. It is not built like that. Code is entered via the keyboard and stored
electronically. Once lines of code or a configuration file are functioning
correctly, they need to be "locked in place". While versioning allows for
changes to be rolled back, I still do not know of a workflow that incorporates
locked files that can't be changed, once they have reached maturity.
One way to do this, however, is to create a physical hard copy; that is, a print
out. A Brother black and white laser printer is an excellent choice for that,
preferable over its inkjet counterpart, because toner does not dry out when
the printer sits for an extended period. Once printed, the pages need to be stored
somewhere. The same type of organization that goes into structuring a document
to be used as a book, or structuring directories used to contain many files can
be used to structure a set of files to be used for physical storage.
Working through the steps needed to store key parts of a program in a
physical format can reveal how amenable the electronic structure is to being
recreated in a physical format. For example, the directory structure housing the
images, drawings, documents and tables on this site are all contained in a
directory called /media. The /media directory is
underneath what I am calling a "versioning" directory called /020.
Over time, this can be incremented to /030 to allow for a fresh
start and a completely new way of doing things. Should the image, drawing,
document and table directories all fall directly under the versioning directory,
or is it better to retain a root directory for all files of type "media"? If a
physical hardcopy is made and those physical hardcopies are placed in a drawer
or on a shelf, that drawer or shelf could be named "Media", precisely
replicating the electronic version. While this approach may appear odd at first,
consider what happens when a non-technical user comes along. They would like to
have a look at how it works. Clicking through the directories where their
images, drawings and tables are stored becomes tedious. There may be many. But
pointing them to a shelf or drawer where the same files are available for
viewing by pulling out a binder or a file makes the otherwise completely digital
copy tangible. In doing so, it may reveal something that can be improved upon or
a detail that might otherwise be missed.
Not all data is created equal. There are orders of magnitude ranging from the
very small to the very great. Inhabiting these distances are genomes, the
species that the genomes define, the continents the species are on, the planets
that contain the continents, the solar systems that contain the planets, the
galaxies that contain the solar systems, and the universe that contains the
galaxies. The data is relevant within the context it occurs, provided
that awareness is retained of that context. For example, it is obvious that
the genomes that contain the information that defines a species are relevant,
when the species can be seen. We can see fish, dolphins and whales, otters,
groundhogs and porcupines, algae, lily pads and seaweed, etc. What we
cannot see as readily are the other planets in our solar system. They
appear as mere dots that move through the sky. We also cannot see the moons that
orbit the planets of our solar system, without assistance, except for ours, and
we certainly cannot see the planets that orbit other stars in
our galaxy without assistance, though we are now certain they are there and
have been cataloguing them for some time[1]. It is precisely the value of data
understood within the context in which it occurs that provides the motive for
creating a physical hard copy. We want to be certain we
have a record of it.
This section deals with the equipment needed for the programmer. This
equipment will be nearly identical, or identical to that needed for other desk
bound computing work, including data analysis, writing and audio, image and
video editing. The programmer's work is text based and so equipment is more
tightly focussed around this work, such as a reliable black and white laser
printer as a first choice, rather than a color printer. Multiple monitors are
used. A minimum of two quality speakers are used for listening to music, video
documentaries and video conferencing. At least one USB microphone of a
reasonable quality or better is available for recording audio and video
conferencing. A second XLR microphone may be available with broadcast quality
recording. A single serve coffee maker is present, with water being filtered by
the reverse osmosis process or available as a known quality spring water. At
least one camera for recording tutorials is installed. The desktop computer has
a processor up to date within the last three generations (i.e. an i5 if the
current generation is an i7) with sufficient memory for processing videos and
graphics editing. External backup is provided, with daily automated
work-related backups performed with a manual check, or manual work-related
backups performed by day's end.
Experience shows that a desk-bound computer is better for dedicated
programming. Keeping the primary computing device at the desk and locked to it
is more secure, especially if access is limited to the working environment.
Professional programming is expected to involve sensitive or confidential
information, often for clients. Thus, having a measure of security in place is a
reasonable practice to build into the work environment and programming habits.
In addition, it is easier to install (and upgrade, if necessary) higher end
components focussed on the task at hand. Text based programming does not require
a lot of CPU or memory; however, audio and video editing and production may.
More precisely, the results of text based programming may
include analysis of large sets of data, or CPU and memory intensive modelling.
In these cases, the computer can be built with the components needed.
A docking hub is a physical device to which a laptop is attached so that it
can easily use the expected stationary peripherals which a desktop computer
uses. These include multiple monitors, speakers, a camera, a microphone, full
keyboard and mouse. Specifically, certain older professional grade laptops have
a docking slot built in, so that they can be placed on and locked into the
docking hub. Newer laptops have USB Type-C interfaces, for which USB Type-C
docking stations are available. Despite the expected interchangeability of this
equipment, experience shows that there is variability. Not all Type-C docking
stations work with all Type-C devices. This is one more reason to assemble a
tested, functioning system that is rolled out across a contractual network, so
that each member of this network can benefit from the technical expertise of one
or a few individuals in this area. This means--at minimum--that the laptop and
docking station combinations need to be standardized for each contractual
network.
The keyboard is an essential piece of equipment when using a laptop or
desktop computer. The layout of the keyboard makes it easier or more difficult
to perform common tasks. These tasks will differ by speciality. Keyboard
shortcuts use combinations of keys which--when pressed together
simultaneously--access a function which could only otherwise be accessed by a
sequence of mouse moves. This type of access speeds up work considerably when
those keyboard shortcuts are known and part of the routine of the desktop
working professional. There is a nuance, however. It is that not all keyboard
shortcuts are standardized, and some overlap. This means that the same shortcut
will perform different functions in different programs. Or, the same shortcut
will collide with that from another program, so that an unexpected effect
occurs. Or a shortcut bound to the desktop environment collides with the
shortcut from a specific program, also producing an unexpected effect. Finally,
keyboard layouts differ enough that it is worthwhile to invest the
time to find the right keyboard for the set of functions expected across the
range of tasks defined here. All of this, again, supports the notion that
developing a standardized, locked-down workspace with the right equipment set up
in the right way, will result in dividends across the board.
Next to the keyboard, the mouse is the second most essential piece of
equipment with which to interface with a desktop or laptop computer. A mouse
typically has two buttons on the front, with a wheel for scrolling. The
expected behaviour of the left button is to execute a "click" action, and the
expected behaviour of the right button is to have a menu displayed. On some
keyboards, the same context-sensitive menu can be displayed when pressing the
Menu button. This button is available on the keyboard shown above.
A computer mouse can be connected via a cable, wirelessly using the 2.4 GHz
protocol or wirelessly using the Bluetooth protocol. Some mice can be connected
by both a wire and wirelessly, or by both the 2.4 GHz and Bluetooth protocols,
with a switch on the bottom of the device to move between one or the other.
Finally, the quality of the mouse and the ability to track precise movements
varies greatly from brand to brand, and within brands. For a dedicated,
professional grade working environment, it takes a deliberate effort to ensure
the right mouse is chosen and tested for the job.
Monitors are one of the essential pieces of equipment in the professional
grade computing setup. Experience shows that three monitors provide a balance
between enough viewing space to perform the basic tasks of file editing,
communication, uploading and viewing uploaded content online and the physical
space required to house those monitors. The physical break between monitors
provides a hard division between tasks that cannot be obtained by
dividing a single monitor up into sections. That being said, software
can be installed to create panes on the individual monitor. However,
again, experience shows that these are excessively malleable. The layout does
not stick over the long term (long enough to become used to having
a specific type of information in a specific location). In addition, if a
laptop is being used with a docking station, and the laptop provides the
primary monitor along with the CPU, memory and storage, then when it is removed
from the docking station, the windows open on the other two monitors collapse
onto the laptop monitor. This means that a balance needs to be struck between
the number of monitors used, what is on those monitors as a matter of habit, and
what happens when the laptop is removed from the docking station. This is one
more reason why a standardized, replicable setup has potential. Deciding which
combination of monitors and windows works best for multiple users is expected
to be
a catalyst to crystallize the precise configuration for an improved
workflow.
Coffee and water are being included in the "Equipment" section as a
placeholder for the nutrition and hydration required by the deskbound
professional. Obviously, not all professionals will drink coffee, but a beverage
of some type is expected. If not coffee, then likely tea. The type of coffee,
the brewing method and the water used are as important as any other aspect of
the environment. A key aspect of including this feature in the work environment
is that the equipment (including the mug) needs to be placed somewhere. The
brewing machine requires power, and it is expected that it will be on a separate
breaker from the power source feeding the sensitive electronic equipment.
Kettles and brewing machines typically require 1,000 to 1,500 watts, while a
standard 15 A, 110-125 V breaker is rated for at most 1,875 watts. This means
that the kettle or brewing machine ought to be on its own circuit to avoid
overloading and tripping the breaker by which the computer equipment is
powered. If city tap water is all
that is available, then a reverse osmosis machine is recommended for water
filtration. The other option is to purchase quality reverse osmosis water, or
obtain water from a local spring of suitable purity. Eating or drinking foods or
liquids that reduce performance by dulling the mind or reducing energy levels
takes the edge off the rest of the design of the workspace. The worker is an
integral part of it, and needs to be treated in a way that supports the ideal of
quality throughout.
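As a minimal sketch of the breaker arithmetic above (the 1,875 watt figure is
the standard 15 A x 125 V rating; the load names and values are invented for
illustration):

    <?php
    // Nominal capacity of a standard North American 15 A circuit.
    $breaker_watts = 15 * 125; // 1,875 W

    // Hypothetical loads sharing the same circuit, in watts.
    $loads = ['kettle' => 1500, 'computer' => 300, 'monitors' => 90];

    $total = array_sum($loads);
    echo "Total load: {$total} W of {$breaker_watts} W\n";
    echo $total > $breaker_watts
        ? "Overloaded: move the kettle to its own circuit.\n"
        : "Within capacity.\n";

With these figures the total comes to 1,890 watts, just over the limit, which
is exactly why the kettle gets its own circuit.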
The emphasis on clear sounding, robust speakers has declined with the advent
of electronic technology. Even before that, moving sound production from analog
to digital has reduced the timbre and depth of audio reproduction. This can be
verified by listening to vocals recorded in the 1950's versus vocals recorded in
the 1980's and later. There is a remarkable difference. Having said that,
making the effort to move beyond speakers built for computers to speakers built
for the studio will result in better quality sound in the space in which
they are placed. Listening to music is a key part of work. Having the sound as
clear and distinct as it can be is expected to result in a pleasing work
experience. Looking at it the other way, the computing professional will want to
record audio to explain the code and the tutorials they create. When recording,
that audio needs to be listened to for editing purposes, but also for volume and
clarity. Having studio quality speakers in place removes them as a potential
bottleneck from the equation. Writing top end code, or other types of digital
products is enhanced by top end audio. It keeps the level of expectation high
from the consumer, when the same level of quality is experienced
throughout.
The author has tested the three standard types of microphones used for vocal
reproduction (cardioid dynamic, hyper-cardioid condenser and broadcast dynamic)
and three sizes of USB microphones from the same brand (the Blue Yeti, Blue
Snowball and Blue Snowflake). Of these, the one preferred so far for authentic
speaking-voice vocal reproduction is the broadcast dynamic. A strong second
choice would be the Blue Yeti, when placed correctly in an acoustically neutral
environment. The hyper-cardioid condenser was accurate, but crisp, and the
cardioid dynamic tested was designed for on-stage vocal reproduction. It was
adequate, but not tuned specifically for the speaking voice. The main
difference between the broadcast mic and the Blue Yeti in terms of physical
setup is that the broadcast microphone is designed to be placed on a boom,
whereas the Blue Yeti--due to its size and weight--functions better when placed
on its stand directly in front of the individual speaking. As the configuration
being created here is for the computing and deskbound professional, it is
assumed that the main activity will be at the keyboard. The Blue Yeti needs to
be placed in front of the keyboard for best effect. This means it gets
in the way when not in use. The broadcast microphone was set up using a desktop
stand, a counterweight and a boom of reasonable length, allowing it to be swung
out of the way when not in use. These are not trivial considerations. If
recording audio is going to be a key requirement for the workspace
configuration, the microphone doing that recording has got to be ready
when needed, and unobtrusive when not. Critically, vocal audio recording is such
that it works best when the microphone is within a handspan of the speaker's
vocal apparatus. This includes the diaphragm, as well as the throat and mouth.
A third option for unobtrusive--but authentic--vocal recording is a suitable
lapel microphone. Of these, the broadcast microphone is the first choice.
Recording video with a camera in a desk-based environment requires more than
placing a camera on top of the center monitor, sitting down and pressing
"Record". This will get the job done... with a 1 out of a possible 10. Why
bother thinking about SEO, writing crisp text, getting a haircut, dressing
neatly, etc., if all the entrepreneur is going to do is point the camera at
themselves... and everything behind them? Despite the glaringly obvious lack of
professionalism demonstrated with this method, nine out of ten videos are
published using this slapdash approach. On a budget, and with limited options?
Turn the desk around so the camera is pointed at a wall. The bulk of
distractions are removed. Hang a black tablecloth on the wall. The focus then
turns to the moving object in the center (the person doing the recording).
Install a sidelight. Depth is created. Etc. Video tutorials abound for
improvements like these. The focus here is creating a workspace where this is
already done. Having a cube of 10'x10'x10' or less allows the configuration to
be created once, then replicated. It isn't difficult, once starting out on the
right foot. The difficult part is thinking in this manner. It is a conceptual
shift. The framed cube provides a ready means for installing lights. The side
and fill lights have a structure on which to hang. Not only that, the placement
of this equipment is meant to be stable. That means it doesn't have to be taken
down if moved. These hints alone ought to be sufficient for the savvy
entrepreneur. A nice touch would be adding one or two cameras for a back view
and side view, with due care being taken to avoid recording passwords on a sheet
of paper, or other sensitive information. Having said that, moving from a
desk-bound professional whose main means of communication with their clients is
the digital products they create, to one who extends that effort to their audio
and video communication is a benefit a standardized, replicable workspace can
provide.
A key principle
in the model of a self-sufficient community is to have a low-tech fallback in
place for critical systems. It is being learned that redundancies
result in extra equipment which can get in the way. Thus, an over-arching
objective would be to retain system functionality while reducing redundant
equipment, processes and digital copies to their essential minimum to prevent
clutter and the question of which piece of equipment or digital product is
authoritative or canonical. This is not a trivial requirement. Consider what
happens when a backup is made of a digital product. Work performed past the
backup is unique and must be backed up as well. If there is a disruption and a
roll-back is required, what has happened in the interim may only be known by the
person doing the work. Thus, before a backup is taken, emphasis on the work
itself, the model it is following, and the daily work routine is expected to
hold the greatest value. Past that, judicious, known backups are relevant, in
the context of the primary work. Automated backups may be useful, but only in
the context of the individual who is aware of what they hold. Without this
nuanced requirement, a year could pass with the expectation of work being
backed up, only to find a key parameter missing and the expected backup absent.
The one-to-many model of third party hosting solutions mitigates the risk, but
an improved tech-person-to-user ratio is expected to produce better results,
with digital products having greater solidity. External storage is being
defined here as digital work stored separately from the device on which it is
created. It should be noted that online applications store creative work online
by default, with offline storage being offered as an
option. The risk of using offline external storage as the default is that the
storage medium (as simple as an external drive) could be lost or compromised
more easily as it is physically compact (palm sized or less). However, securing
the entire work environment in a physically protected working cube of
10'x10'x10' or less, reduces that risk as the external storage would have a
dedicated spot on, in or under the desk. In addition, having the working cube in
a protected or semi-protected area on owned property will prevent the
casual passerby from coming near valuable digital equipment and the creative
work produced on them.
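One way to make a backup "known" in the sense described above is to write a
plain-text manifest at the moment the backup is taken, recording what it
contains and when. A minimal sketch in PHP (the directory name is
hypothetical):

    <?php
    // Record what a backup holds and when it was taken, so that a year
    // later its contents are still known without restoring it.
    $source   = '/home/worker/projects'; // hypothetical working directory
    $manifest = [];

    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($source, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        if (!$file->isFile()) { continue; } // skip directories
        $manifest[] = md5_file($file->getPathname()) . '  '
                    . $file->getPathname() . '  '
                    . $file->getSize() . ' bytes';
    }

    file_put_contents(
        'backup-manifest-' . date('Y-m-d') . '.txt',
        "Backup taken " . date('c') . "\n" . implode("\n", $manifest) . "\n"
    );

The manifest travels with the backup, so the question of what a given copy
holds can be answered by reading one text file.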
Limiting the number of people an individual or group interacts with is
essential to success. This includes potential markets for distribution of goods
or services. Without restrictions, exponential growth will lead to more
commitments than are manageable. This, in turn, will lead to collapse. Realizing
this basic system principle in advance--before and while a system is being set
up--allows fine tuning to occur; so that what is done, is done well.
Matching the supply (of a product or service) to the need is a missing link in
the current economic system. Computer networks are fully capable of balancing
the supply/need equation for better efficiency and performance across the board.
Stock levels at branch stores can be seen by customers, so that they know which
store to go to for a specific product; wish lists can be created by customers
so that they can remember what it was they would like from a particular store;
but that is where the algorithm stops. Combining a limited population with
improved
matching capabilities is a service definitely worth exploring. However, here is
where the path begins to undulate. On the one side are communal systems that
have proven faulty. Large, continent sized populations have not proven the
effectiveness of a centrally controlled system. Further, small communal-like
systems have the appearance of being too "provincial". There is a certain
advantage to anonymity. Being able to go for a meal or buy an item without that
simple task being overburdened by social nuances is refreshing. Something in
the "habitable" zone that is proven effective over time needs
developing. This is where a physical model that is designed to incorporate the
best from both sides of the fence has the potential to create a path, where
before there was only vagueness and blank space. Once the system is tested and
refined to satisfaction, it can be released into the general population with
greater assurances that it will work as intended.
In a survey of twenty Ontario cities conducted by the author in 2016, the
community with the highest population had 136,000 and the community with the
lowest population had 22,000. The author has been in many (but not all) of
these communities. Since that time, a refined strategy has reduced the arbitrary
starting population of an existing community to 25,000 or
less. No minimum population is being defined in this section, as the
minimum population depends on transportation corridors. Ample, rapid, and
convenient transportation between points is what makes the difference. It is
assumed that the pathway requiring the lowest amount of energy will be used by
individuals, whether that be virtual or physical. Thus, driving to a store,
where that trip takes 10 minutes will be preferable to walking 10 minutes to an
identical store. Even though the 10 minute drive consumes more total energy, the
energy is external to the individual, and thus requires a lower
personal energy expenditure than the 10 minute walk. To be comparable, the
identical store would need to be within, say, a three minute walk to match the
perceived travel (and energy expenditure) of the drive.
Based on actual experience, a city of 30,000 inhabitants (round figures) with
big box stores 4 kilometres apart was uncomfortable to navigate when walking, as
that walk took about an hour in real time. Hilly terrain and heat added to this
discomfort. Two towns of about 15,000 (Towns A and B) were easier to traverse on
foot, but there was still a walk of about 1.5 km from the town core to the edge,
where the big box stores tend to be placed. A bus was available in Town B, but
it travelled in one direction only, on a one hour schedule. The third town
(Town C), currently being experienced first hand, has 20,000 inhabitants--about
5,000 more than Towns A and B--but a more comfortable walk to some of its
amenities. It also has a single bus route, but this travels in both an easterly
and westerly direction, taking an hour to travel one way, then the second hour
to travel the other way. Thus, experience with towns or cities in the 30,000
population range or above shows that they are not comfortable to walk, which is
a key requirement for having a low tech fallback (i.e., that they are
walkable).
A topical focus for a marketable idea or a functional focus for a marketable
product are two ways to reduce the target population from an unmanageable
"everyone"--spread around the globe--to a more manageable number. Even when
delivering a digital product, the behind-the-scenes infrastructure is very
different for the individual marketing an add-on to a software platform, and the
organization delivering that software platform which is used worldwide. In the
second case, dedicated personnel are required to ensure servers are operating as
intended. In the first case, an annual web hosting package can be purchased for
the cost of a mid-priced fall jacket. They are not comparable. A third way to
reduce the target population to a manageable number--while still retaining a
focus on a specific product or service--is to limit the geographical area being
serviced, while ensuring that the individuals in that population are part of
a system where their needs are being met, within that system. It is
expected that the only way a limited number of people can actually
function in this manner is for this system to be built from the ground up, with
individuals contracting to be a part of it. The built-in habits and inertia are
simply too strong for an existing community to adopt a different system, expect
a smooth transition, and not have anything break. Regardless, what can
be done is for a contractual distribution model network to be set up, so that a
large enough group of people, in a limited enough geographical area, can examine
how that might work for them, and play around with it, without having it
drastically affect their lives.
The business owner alone has many decisions to make. How much of a product
should be supplied? At what quality? At what price? Too low a price and there
is not enough profit; too high and the product may not sell. There is also
marketing to consider. How many dollars should be spent advertising? Employees
want to be paid well and have benefits. All of this needs to be balanced and is
difficult without extra information.
Likewise, the consumer has many decisions to make. How much should be
purchased when an item comes on sale? What quality level can be afforded? If
more is bought at a lower quality, but that product does not serve the need,
what good is it? How much effort should be put into sourcing common goods? How
far should one travel to get what one needs? What happens if there is a downturn
in the economy and a job is lost? These are all concerns from the consumer’s
point of view.
In some industries, there is a link between the supply and demand. This is
the case with taxi dispatchers and dispatchers for emergency services. They
moderate who needs what and serve as a link between those requesting the service
and those able to supply the need. In many cases in the private sector, the
consumer is left to fend for themselves to find what they need. Only in special
cases may they have a person to help them. This could be a friend, a child, a
parent.
In general, the economic system is set up so that suppliers and consumers
respond to economic based indicators. If the price goes up, less will be
purchased. If the price goes down, more will be purchased. It is up to suppliers
to gauge how much and of what product to supply. The consumer is free to buy as
much as they wish, providing they have the funds to do so. A downturn in the
economy could result in less money being available to purchase goods and
services and thus make things difficult.
For suppliers that are able, they may obtain help to determine what product
to sell and at what price. More commonly, they will hire a marketer to help them
advertise their goods, or advertise “in house”. The assumption is that the more
they can convince people to buy their product through greater awareness or other
strategies, the more successful they will be and hence, the more money they will
make. This approach often ignores other businesses in the same sector or
ecological concerns.
By the same token, a wise consumer who is able may obtain help from
neighbours, friends or family members to help them determine where to buy, what
to buy or how much to buy. They can research the product and are free to
purchase from anywhere, as long as they are able to afford it. This approach
works as long as the consumer has the necessary skills to do so and is able to
afford what they need. It ignores the business owner as there is no commitment
to purchase from anyone in particular if they choose not to do so.
When the supplier begins to take a holistic view and seeks to obtain balance
between their own needs, the needs of other businesses in the same sector, the
needs of the consumer and the needs of the environment, then long term
sustainable practices can start to be found and implemented. There can be
movement from competition to cooperation. However, a business can only control
its own actions. Other businesses and consumers can still do as they choose.
If these other players do not have long term sustainability in mind, then they
could undercut other businesses.
A further step towards balance is taken when the consumer also seeks balance,
cooperation and long term sustainability. They may be met by the supplier who
wishes to do the same. However, if there is no systematic way to help ensure
that what they purchase is balanced by what is supplied, the entire system could
still be out of whack.
The “missing link” is found when an entirely new sector is imagined whose full
time occupation is the job of matching supply to demand. If this happens, then,
systemically, there can be real, long term sustainable practices as work is
done to ensure that balance is obtained. When a supplier has too much stock on
hand, they might have to dispose of it. Instead, a person can work to match
supply to need at a sufficient level of detail so that there is more balance in
the local economy and thus a better chance at long term sustainability.
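A sketch of what that matching might look like in code, using the PHP and SQL
mentioned elsewhere on this page (the database, table and column names are
hypothetical, invented for illustration):

    <?php
    // Match surplus stock to standing needs within a limited population.
    $pdo = new PDO('mysql:host=localhost;dbname=network', 'user', 'pass');

    $sql = "SELECT s.supplier, s.item, s.quantity AS surplus,
                   n.consumer, n.quantity AS wanted
            FROM supply s
            JOIN need n ON n.item = s.item
            WHERE s.quantity > 0
            ORDER BY s.item";

    foreach ($pdo->query($sql) as $row) {
        $matched = min($row['surplus'], $row['wanted']);
        echo "{$row['supplier']} can supply {$matched} x {$row['item']} "
           . "to {$row['consumer']}\n";
    }

Nothing about the query is sophisticated; the point is that the data structures
needed to match supply to need already exist in every stock-keeping system, and
only the occupation of using them this way is missing.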
The above text was written in 2017, and is available as a downloadable PDF,
with icons depicting the various stages of the model. The distinction
being made here (in 2022) is that it is more likely that balanced supply and
need will occur when population and geographical area are balanced in manageable
sections, rather than attempting to find balance in a population of millions of
people covering millions of square kilometres.
A transition to an improved system involving people is similar to a
transition to an improved system using code. For the programmer, there are steps
to take to minimize system downtime when a transition is made. One of these
steps is to make changes on what is called a "staging" site. When these are
proven stable, these changes are then applied to the live, production site. When
people volunteer for an activity and understand the risks, these risks are
perceived differently than if the same risks are experienced involuntarily.
Thus, when creating a multi-person system on 100 hectares for the purposes of
testing, recruiting people who voluntarily engage in the project is expected to
produce better results than recruiting people who expect success from the
beginning, without understanding that there is a risk of failure. A survey of
intentional communities by Diana Leafe Christian revealed that the majority of
these systems failed. Looking at the numbers involved in each project, it was
seen that they were quite low in most instances. Increasing the number of
people to increase the diversity of skills, trades and professions is expected
to result in better outcomes.
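For readers unfamiliar with the staging practice mentioned above, a minimal
sketch of how a programmer keeps the two environments apart (the file and
setting names are hypothetical):

    <?php
    // config.php -- select settings by environment, so the same code
    // can be proven on the staging site before touching production.
    $env = getenv('APP_ENV') ?: 'staging';

    $config = [
        'staging'    => ['db' => 'site_staging', 'debug' => true],
        'production' => ['db' => 'site_live',    'debug' => false],
    ];

    return $config[$env];

The community parallel: the 100 hectare test site is the staging environment,
and the general population is production.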
Using known and tested practices from tracking changes and upgrading software,
it is then possible to conceptualize the steps needed to track changes and
improve existing systems involving people. One way to do this is to create an
actual model system on actual land, where the layout of the community
is similar or identical to the schematic at the top of the page. Participants
are from select trades and professions, already skilled and with experience.
They would join in the same way they would take on a job at a company. The
difference is that they would own property on the site in the same way that they
would own a property in a subdivision. They are allowed to do what they like,
within established parameters. A major requirement is that they provide their
established knowledge and skills in their chosen trade or profession to provide
specific products or services to the rest of the participants on the same site.
Although it may take a significant effort, calculating how much time per day,
week or season is needed to meet the needs of the other project members will
also determine the number of hours, days or weeks per year they will have left
to ply their craft outside of the community.
For example, work for some trades is seasonal. More work is expected in general
during the summer months. During the winter--when work is often slow--time could
be utilized to perform similar work "in-house"; that is, within the community.
If the carpenter/cabinetmaker is working off site during the summer months, they
may then have the winter months to upgrade kitchen cabinets for a number of
community members. The same would go for the mechanic. This is where computer
based modelling could be engaged to provide precise numbers and expectations for
community members before they actually set foot on the site property.
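A sketch of the kind of computer based modelling mentioned above, with all
figures invented for illustration only:

    <?php
    // Rough model: hours a tradesperson owes the community per year,
    // and what remains for outside work. All numbers are hypothetical.
    $members          = 36;   // people served on site
    $hours_per_member = 10;   // in-house hours each member needs per year
    $work_year        = 1800; // total working hours in a year

    $in_house = $members * $hours_per_member; // 360 hours
    $outside  = $work_year - $in_house;       // 1,440 hours

    echo "In-house obligation: {$in_house} h/yr\n";
    echo "Left for outside work: {$outside} h/yr ("
       . round($outside / $work_year * 100) . "% of the year)\n";

Even this crude version gives a prospective member a concrete expectation: with
these figures, four fifths of the working year remains for work off site.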
The simplest way to transition to an improved system (and the one deemed most
difficult to implement due to inertia and ingrained habits) is to use existing
systems, but use them differently. Ingrained beliefs include the notion that
geological changes have occurred slowly over eons. Channels cut into solid rock
were caused by erosion over long periods. The deeper the cut, the longer it
took. Careful observation shows, however, that a second alternative is
possible. The change occurred by a different method, similar to snow drifting
to the leeward side of a snow fence, and it took much less time. Changes
occurring slowly--when they occur to people--make sense when the people to whom
they are occurring are not aware of the process, and those making the changes
wish it to
be that way. However, many events in life have a definite beginning. This
definiteness is made possible when the person making the change is aware of what
is happening and is doing so on a voluntary basis. Changes happening to people
who are not aware of the process are expected to build up resentment.
Changes happening to people as a result of their conscious decision are
not expected to build up resentment, because they went into a situation
fully aware of what the possible outcomes might be.
To make this second alternative possible, then, it is required that the
concept be explained as fully as possible. This is what the information
presented on this page is doing. It is taking an individual (it so happens it is
the individual that designed the model), and looking at the model from the
perspective of that individual. What does it look like from there? What can be
expected? What is needed to make it happen? Thus, there is a top-down approach,
a bottom-up approach and a "tackle-it-by-diving-in-at-the-middle" approach. When
all three perspectives are combined, a sense of how that model will actually
work should emerge.
An example from the author's experience may help explain. There is a tendency
to create models digitally. Thus, cars and machinery are designed using Computer
Aided Design (CAD), and then built using that design. There is a high degree of
precision using this method. However, it does not use a real-world model that is
replicated. It is digital and can't be walked around, touched and looked at as
a solid object can be. When building cabinets, a master craftsman preferred
using what he called "the stick". This stick was the length of the
cabinets and had the precise locations of where they began and ended, as well as
where the divisions were. A key aspect of this method--which made it preferable
to using digital means--was that "it never lied" (his words). That is, when
carrying that stick to the job site, it was guaranteed to show what the
measurements ought to be. It always worked. In the same way, if setting up a
model on a plot of land, that really is the way to do it. Once that is done, it
is possible to look at it, walk around it, touch it. There is simply too much
information bound up in the 3D environment to capture it all in digital format
and explain it using words, or even video. The best way to explain it is to do
it, and let people observe. If it is done right, few words should be needed.
A model needs to be used to make a transition in an existing system to keep
that transition on track. Different tasks require different mindsets. The
mindset used to develop a model on paper or using a computer is different from
that used when building it or working within it. Without the structure defined,
it can't be followed, and the "let's-do-it" mindset won't get it done right. It
takes years to build up a set of skills within a trade or profession. It is that
experience which is relied upon when performing the work. It is only those
without that experience who would want to take shortcuts or eliminate steps.
Those with the experience will know what happens when it is done right, and when
it isn't. This is true whether building a structure with physical materials, or
whether engaged in a conceptually-based profession such as engineering,
architecture or programming. The structure (physical or conceptual) built with
tried and true methods is the one that will last. The one that isn't is the one
that won't, or one that will start to deteriorate within a decade, rather than
lasting thirty or forty years or more.
The author has experience in both building and in programming. In addition,
he has seen the results coming from a long-term, year-by-year approach, where
seeds planted in the spring are harvested in the fall, are used to feed
livestock throughout the winter, and until the next harvest again. A short-term
view which expects results in a few weeks won't work, when a seasonal, and
yearly cycle is required to see the results. This means that the farmer,
builder, or programmer gets up and works day after day, and month after
month, before seeing the fruits of their labours. In the case of the design of
this model, the same cycle has extended to five to seven years. However, the
principle is the same. Giving the project time to mature lends confidence in
the subject matter. That confidence, stemming from familiarity with the plan,
makes the difference. The same design viewed by the architect with only a few
years experience will appear different to the architect with a decade or more of
experience.
Surprisingly, the history of electronic digital computing--relative to the
history of the industrial revolution (first) and to the development of western
civilization (second)--is both relatively brief and remarkably rapid. I have
looked at a photo of my great grandfather on the farm kitchen
wall. Beside that photo, and from the same era, was an image of a hayfield,
stooks of hay, a wagon, a horse, and the people who harvested that hay by hand.
This would have been from the eighteen hundreds or early nineteen hundreds. At
that time, electricity had been discovered and was coming into use. Scarcely
fifty years later--in the mid 1950s--the ability to inscribe circuits on
silicon was developed, leading to the advent of the integrated circuit, and
later, the micro-computer.
Now, fifty plus years after that, it isn't enough. Thought has been found to
influence the ever more complex machines, so sensitive that they are affected
by the state of the operator, who needs to be calm to keep them operating as
they should. Finally, it may be that it is possible to dispense with circuits
altogether and move to a crystal based structure, upon which the thoughts of
the controller may be directed to imprint their intent upon the structure of
the universe: to heal, transport... and produce power. What has caused this
leap, and how are the technically inclined to keep pace without losing their
stride amidst the changes? It may be that the physical, nutritional and
conceptual
structure provided by an integrated community design could help with that.
To facilitate discussion about the various levels of computing, especially as
it relates to retaining a graceful low-tech fallback, a number of terms will be
used to distinguish between the possible types of computing which may not
otherwise be explicitly defined. The first is mechanical computing, defined
here as the process of using a machine or mechanics to perform a calculation.
The second is analog computing, which uses actual quantities such as air
pressure, temperature, distance, weight and so on to make decisions. The third
is manual computing, which uses pencil, paper and numbers to perform
calculations such as addition, subtraction, multiplication, division and
differential equations. Finally, electronic digital computing is the main form
we use today, where information is reduced to binary on-off bits and decisions
are made with miniaturized logic gates.
One of the reasons I am not fond of making videos is that the experience of
making videos is not at all like the experience shown by the video.
There is not a one-to-one correspondence, unless the camera is simply
pointed at the subject and used as is. This would normally be considered "raw"
footage and, in my opinion, is the lowest form of video production. A high
quality video is a lot of work. It is, in fact, one reason I am going to such
lengths to set up the community model. It comes with built-in digital
video production, because it will employ someone both gifted and trained in that
field, and this person will be distinct from the subject of the video. In
general, this is the best way to proceed in any field; that is, to specialize
and then put the results together in a cohesive whole. It is a reason why $50
million was allocated for one of George Lucas' later Star Wars movies, and why
people have flocked to the theaters to see them when released. So, this is a
reason why I typically don't make videos: it is not an analog experience, where
the footage seen by the viewer is what the subject of the video experiences.
Now, the following are a few examples of the use of analog feedback to make a
decision.
Weight: A pressure sensor at the top of the bin trips a switch which turns the
auger off when the grain reaches it.
Air Flow: Louvers on the exhaust side of the fan open up as a result of air
pressure from the fan when it is on, and close when the fan is off, preventing
a chilling back draft in the colder months of the year.
Temperature: Warmer temperatures heat a strip of metal, causing it to expand,
opening a contact and turning the furnace off.
Position: The mail is delivered and the person delivering the mail swings the
flag up on the mailbox, indicating to the home owner that there is mail. If the
mail delivery vehicle goes by and there is no mail for that day, the flag does
not go up.
In all of these cases, there is a single decision being made, but in each case
that decision is conveyed by different means. Finally, all of these have been
experienced by the author first hand over a number of years. All these methods
are reliable--at least as reliable (if not more so) than their digital
counterparts. If something does go wrong (such as grain or dust getting behind
the solenoid switch at the top of the grain bin) it is easy enough to fix. The
problem can be seen and the part cleaned or replaced.
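For contrast, each of those analog examples reduces, in its digital form, to a
one-line threshold test. A minimal sketch (the sensor values are invented):

    <?php
    // Each analog mechanism above makes one binary decision.
    $grain_at_sensor = true;  // pressure sensor at the top of the bin
    $temperature     = 22.5;  // degrees Celsius
    $setpoint        = 21.0;

    if ($grain_at_sensor) {
        echo "Auger: OFF\n";                  // weight
    }
    echo $temperature > $setpoint
        ? "Furnace: OFF\n" : "Furnace: ON\n"; // temperature

The decision is identical; only the means of conveying it differs, which is the
point being made.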
When first embarking on this path a dozen years ago, I did not think that I
would reach the point where I would be able to connect the dots...reaching far
back into history, and finding relevance in that to today. There is too much
difference--on orders of magnitude--and that is precisely the problem. In fact,
these non sequiturs are found not just in computing, but in ancient stone
megalithic structures (which display remarkable feats of engineering), and in
religion (which is the field I first chose, assuming its relevance to the
problems of the day and for the eternal soul). Could there be a reason for this
"meta pattern" emerging? For example, even reading the "History of Computing"
article on Wikipedia[1] with eyes half closed, one can't help but wonder what
happened after the rise of monotheistic religion. Before that time the
list of achievements was extensive: the Sumerian abacus, a differential gear
(later used in analog computers), a sophisticated Sanskrit grammar, a
mechanical principle of balance, and the Antikythera mechanism. Not to mention
Euclid's geometry and Archimedes' other achievements. Brought to mind are such
basic
computing tasks as counting using a one-to-one correspondence, where
marks on a stick corresponded to bushels of wheat or corn in the storage bin,
for example. How is it that the all supreme (G)god--who was concerned with his
people--did not continue to ensure there was enough for the entire year, by
making this part of the gathering each week? This, in fact, is a distinct story
in the Old Testament.
In other words, one of the emphases of the design of the community structure
being developed is that the more advanced does not eclipse the less advanced.
This then would include the need to know if there is enough food for the entire
year, by counting the number of people in the community and balancing that
number with the kilograms of food expected to be harvested for that year, as
well as what is stored in the pantries, freezers and root cellars. Instead,
the focus of the monotheistic religion (which required roughly 15% of one's
time, and 10% of one's income) was essentially to forget all that, face the
front, and listen to a person talk about everything but that which was directly
relevant to one's every day life. On top of that, no questions were allowed
during this weekly one-way lecture. The only number that counted was (a) showing
up, and (b) making sure enough money was put in the plate each year. And this
was one to two thousand years after the remarkable achievements
mentioned above. Then, despite this disconnect between the expected weekly
behaviour of the entire community, and the relevance of what was taught during
the weekly one-way lectures, the admonition was leveled from parent to child to
be... realistic, practical, down-to-earth. To cut to the chase, it sounds as if
something was cut out of the fabric of reality, and that gap was hidden in the
busyness of life created to cover it. This gap shows itself again in
the rise-fall-rise pattern of technology seen over the past 3,000 to 6,000
years.
The single thread running through the previous two paragraphs (and though a
single thread, it is highly relevant) is the process of counting food,
counting people, and counting days. Even today, with palm-sized supercomputers
vying for attention on city sidewalks (and winning), this simple, basic task is
missing. Supermarkets know how much stock they have, but this isn't related to
the number of people in the community they serve and the number of days until
the next harvest. Stock is likely kept based on how much is sold, not
how much is needed. How much is needed is simple, it is roughly a kilogram of
food per person per day. A kilogram of food takes roughly a litre of space to
store. A litre of space is 10cm x 10cm x 10cm. There are 365 days in the year.
Therefore a single adult requires approximately 365 kg of food per year and 365
litres of space to store that food. A family of five (calculating for all
adults) requires 1,825 kg of food and 1,825 litres of space. Despite the
simplicity of these calculations, and the necessity from which they stem, the
author has not seen these posted or listed anywhere, nor talked about. This is
an example of the more advanced technology bypassing the less advanced (but
essential) technology. The net result of equations like this is that the
programmer... is not needed in an integrated, space limited community for it to
function. These types of considerations can be--and have been--performed with
paper and pencil, not complicated computing equipment. In fact, they may get in
the way for something as basic as this.
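The arithmetic above, written out as it might be done with paper and pencil,
or just as easily in a few lines of PHP:

    <?php
    // One kilogram of food per person per day; one kilogram stores in
    // roughly one litre (10 cm x 10 cm x 10 cm).
    $people = 5;   // family of five, counted as adults
    $days   = 365;

    $food_kg       = $people * $days; // 1,825 kg per year
    $storage_litre = $food_kg;        // 1,825 litres of storage

    echo "Food needed: {$food_kg} kg/yr\n";
    echo "Storage needed: {$storage_litre} L ("
       . round($storage_litre / 1000, 2) . " cubic metres)\n";

That 1,825 litres is less than two cubic metres: a pantry and root cellar sized
problem, not a supercomputer sized one.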
The elements of programming are typically seen as digits and logic, stitched
together with programming languages. Of these, the author is primarily familiar
with PHP and SQL, with some knowledge of JavaScript. HTML is a markup
language, not a programming language. It does not make decisions. CSS has to do
with styling. It can make decisions, but related to presentation, such as
adjusting styling based on the width of the browser window. There are advantages
to using different programming languages for different emphases. Some coders
will prefer a language other than PHP for web development. However, when the
code is compiled and run, what is seen is the output of that code, not
the code itself. If careful thought is given to the relevance of the
output versus that of the input (the code), it can be seen that no
code at all is acceptable, if the outcome is the desired one. This is not
the typical path a person who has completed a Computer Science degree would
take. However, it becomes more understandable when considering the history of
computing, apart from electronic computing. Not all computing is
electronic. It does not have to be to produce valid decisions. With this in
mind, analog computing is acceptable, and may even be preferable, if it
results in better, more reliable decisions. Put a different way, beginning with
three dimensional shapes, weights, pressures and torque, and then
abstracting those may be the better way to proceed, rather than beginning
with the abstract digital representation and moving from there to the three
dimensional counterpart.
The previous discussion on the distinction between mechanical, analog and
digital computing leads into the discussion of how abstraction occurs.
Schrieber (sp.) makes the observation that points, lines and planes arise from
shapes and solids, particularly the tetrahedron, which has four points, six
lines and four planes, learning from ancient Egypt. The modern western world
has
replaced a focus on shapes and solids and their immediate utility with a
fascination with the abstract and digital, which has no immediate utility to the
physiological human unless translated into an object that has a use; such as
food, clothing or shelter. The Pauline emphasis on knowledge (gnosis or logos)
and subsequent denigration of the material has helped this transition. Occam's
razor does not support the reduction of an idea (or an object) beyond the
minimum required to represent it. That is, the smell of food cooking, a picture
of an item of clothing or a For Sale sign in front of a house do not replace the
objects they represent. This then directs the flow of thought to the critical
minimum of a self-sustaining community built for actual people with real
buildings, clothes that keep warm, and food that nourishes. When these are in
place, the people in that system can then focus on the more abstract
topics of truth, beauty and goodness, while retaining the former.
If it is accepted that points, lines and planes arise from solid objects, and
not the other way around, this then provides a way forward when thinking about
the best way (or various ways) to set up a self-sustaining community. One way
that works is to set up an empty container (say, a plot of land) and
imagine what is on that plot of land, and how the various elements
interact. This was already done by the author on a soccer field, with stakes and
sisal twine, and enabled the visualization of the size of the dwelling structure
as opposed to the size of the property. This helped to set the size of the
dwelling structure relative to the size of the property and to ensure adequate
spacing between it and potential outbuildings such as a writing cabin or a small
cabin sized workshop. To bring knowledge of our neural wiring into the mix:
we are wired to think, move and act in a three dimensional world. Doing so helps
us to refine our thoughts. Often in doing this, a vague idea is refined or
sometimes rejected as what actually happens isn't always the same as what is
thought will happen.
In particular, using this method to build out the idea for a self-sustaining
community, a bare minimum design has been created that requires
approximately 5 hectares of land (12 acres), a central commons area of 1 acre
(~0.4 ha) and three pie shaped wedges around it at 2 acres (~0.8 ha) each. The
total land allocated within the major circle is 7 acres (~2.8 ha). Each of the
pie shaped wedges is then provided for a single adult, where they ply their
trade or profession. Their trade will have one of three foci: thinking, speaking
or doing. That is, one will be better at thinking (than speaking and doing), the
second will be better at speaking (than thinking or doing) and the third will be
better at doing (than speaking or thinking). A challenge, however, is that
each of the three will need to be able to operate at a critical minimum in each
of the other two proclivities that are not their preferred, natural strength.
Each of them will interact and do business with the others at the boundary
between their property (the 2 acre pie shaped wedge) and the circular 1 acre
commons area, agreeing to respect the other's property and not come on it
unless invited. This means that the individual is fully responsible for what
happens on their own property, including building the buildings in which they
live; a unique
challenge that some would heartily welcome. This absolute minimum design then
would allow the model of a self-sustaining community as it has been developed so
far to be refined before committing more time, resources and people to a larger
or full-sized project.
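The areas just described, checked with a few lines of PHP (using 1 acre =
0.4047 hectares):

    <?php
    // D5H bare-minimum design: one commons acre plus three two-acre wedges.
    $acre_to_ha = 0.4047;

    $commons = 1;     // acres
    $wedges  = 3 * 2; // three pie-shaped wedges of 2 acres each

    $total_acres = $commons + $wedges;         // 7 acres
    $total_ha    = $total_acres * $acre_to_ha; // ~2.8 hectares

    echo "Allocated: {$total_acres} acres = " . round($total_ha, 2) . " ha\n";

The remaining roughly 2.2 hectares of the 5 hectare parcel is taken up by the
border and the corners left over when a circle is inscribed in a square.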
Those who do not understand programming may think that it is difficult to
understand at any level, and so don't even try. Consider, though, that
the basic tools of programming are logic and premises, tools we use every day.
And, or, because. I do this because of that. If this happens then I do A,
otherwise I will do B. If it were not possible for us to make decisions like
this on a daily, hourly, or even a minute-by-minute basis, we wouldn't be alive.
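Written as code, one of those everyday decisions is nothing more exotic than
this (a minimal sketch; the condition and actions are invented):

    <?php
    // "If this happens then I do A, otherwise I will do B."
    $raining = true;

    if ($raining) {
        echo "Work in the shop.\n";   // A
    } else {
        echo "Work in the garden.\n"; // B
    }

The and, or and because premises map just as directly onto a language's && and
|| operators and the condition itself.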
Life is dynamic, and trying to capture it and put it in a box is the first
step to taking the life out of it. Thus, being able to make a fresh start is
one way to improve. Consider how "baked in" cities are, especially the older
cities. They cannot change because they have been that way for hundreds of
years. Even the newer North American cities, which have been established for no
more than 150 years, become calcified. People live, move and have their being
in them based on designs created when technology was primitive compared to what
we have today. Now, with improved knowledge and technology--ranging through
communication, transportation, building materials, building design, energy
creation, transmission and storage, nutrition and so on--would it be wise to
attempt a new city design based around this cutting edge knowledge and
technology?
This is the starting point I took after first hand experience with systems
that were not working, that were sucking the life out of people, rather
than invigorating them and making them stronger. This ranged from industrial
agriculture (which has been replacing the family farm over the past generation),
to trades (which may provide the owners with a livable income, while the workers
build nice things for others, but not for themselves), to the CSA (Community
Supported Agriculture) farms (which require volunteers, or those working for a
stipend, for the owners to make a go of it). I am defining that a key part of a
functioning community is walkability. Can I walk where I need to go, in the
time I have to go there, without losing my stride throughout the day? Are the
various aspects of the community close enough to get to, while being far enough
apart so they don't get in each other's way? Though this concept is simple, its
implementation is spotty. In nearly half a dozen towns and villages I have been
to over the past few weeks, few have this concept implemented, so that getting
from one part of the community to the other--by walking--is easy. This is where
the principle of recursion comes into play. What is the best starting point? The
center (the commons area), the dwellings, or the working areas (the workshops)?
In making the design shown above (D5H), which is the absolute minimum that I
have been able to come up with to date, I ended up with the commons area being
one third of the diameter of the allocated property (which is a circle). The
circle is contained within a square. The square is required because most
properties have ninety degree angles. They are either squares or rectangles. A
border is allowed between the allocated property and the perimeter of the entire
plot of land. This border is set at 15m in the current rendition. When
explaining the idea to someone verbally, I realized it would be better to use
recognizable numbers and units for the allocated properties. Land is still sold
in acres here (Canada), even though we have been metric since the '70s. Thus, I
began with a close metric equivalent (5 hectares) and started working within
that. A reasonable (and recognizable) size for the commons area is one acre.
When the one-acre commons was set at one third of the diameter of the major
circle and the three properties in "the hub" were set at two acres each, the
total allocated area (B) came to seven acres (2.8 hectares). If this sounds
complicated, it is, and a reason why I have included the formulae for these
calculations at the bottom of the diagram. To sum up, recursion (or repeating
the calculation until the desired effect was achieved) was used to come up with
a replicable and testable design. I have open sourced this particular diagram so
that it can be adjusted as needed, without permission required to make changes.
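A sketch of that repeat-until-it-fits approach (the 10 cm step is an
assumption, and the wedge geometry is simplified to a clean annulus split three
ways):

    <?php
    // Repeat the calculation until the desired effect is achieved:
    // widen the major circle until each of the three pie-shaped wedges
    // around a one-acre commons reaches two acres.
    $acre_m2 = 4046.86;              // square metres per acre
    $r0 = sqrt(1 * $acre_m2 / M_PI); // one-acre commons radius, ~35.9 m

    $R = $r0;                        // outer radius, starting at the commons edge
    while ((M_PI * ($R * $R - $r0 * $r0)) / 3 < 2 * $acre_m2) {
        $R += 0.1;                   // widen by 10 cm and recheck
    }
    echo "Commons radius ~" . round($r0, 1) . " m; "
       . "outer radius ~" . round($R, 1) . " m\n";

A closed-form solution exists, of course, but the loop mirrors the way the
design was actually arrived at: adjust, recheck, repeat.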
In front of each one of us lies the answer to our problems, if only we know
how to look. But looking requires knowledge, and knowledge needs to be tempered
by experience to be effective. The question I have been asking on the morning I
am writing this is: "How far apart should the buildings be on a property so that
I can get to them soon enough, without being bothered by what is going on in
them?" (i.e. so that I can focus on work). A related question is, "How far apart
should the buildings be between properties so that the same dynamic applies?"
(far enough so that the individuals living on each property are not bothered by
those next to them, while still being close enough to walk over for a visit,
from time to time). Based on the schematic designed in the past week, this has
worked out to 36 metres (50 paces) from building to building, on the same
property, and 60 metres (85 paces) between buildings on adjacent properties. A
key addition to the design over the past week has been a generous border around
each lot (approximately 10 metres, on average). The programming mindset comes
into play when pacing out the distances. Is it far enough for me to gain a
sense that I am "away" from where I was a few moments ago, yet close enough to
"get there" if I forgot something or need to transition from one task to the
other? The distances created appear to do just that.
One of the parameters used to make these decisions was the number of paces
walked, not merely the distance as measured in metres or yards. The
reason for this is that each pace is a physically noticeable action required on
the part of the person walking. Beyond a certain number of paces, the perception
becomes that it is "too far". Too few paces and it is "too close". To get it
"just right" requires some experimenting. For the author, part of that
experimentation has already occurred over an 18 year stretch, as he was growing
up on a farm in Southwestern Ontario. The distance from the house to the barn
was approximately 36 metres, and the distance from the barn to the workshop was
a bit less, at about 24 metres (less than the distance from the barn to the
house). The distances worked well. For simplicity, the distances created
between buildings on the same property in the current schematic are equal to
each other; three equal sides form an equilateral triangle. Even numbers are
used (which are also multiples of 12) as experience shows these are easier to
work with when determining halves, thirds and quarters than are odd numbers
(which result in fractions if halved, for instance).
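The pace arithmetic above, made explicit (a small sketch; nothing here is
beyond paper and pencil):

    <?php
    // Pace lengths implied by the figures above.
    echo "36 m over 50 paces = " . round(36 / 50, 2) . " m/pace\n"; // 0.72
    echo "60 m over 85 paces = " . round(60 / 85, 2) . " m/pace\n"; // 0.71

    // Multiples of 12 split cleanly into halves, thirds and quarters.
    foreach ([2, 3, 4] as $div) {
        echo "36 / {$div} = " . (36 / $div) . "\n";
    }

Both distances work out to roughly the same pace length, which is a useful
sanity check on the paced measurements.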
Ten days of experience showed that an area of about 360 metres on a side
(~1,200 ft) is enough in which to work and move around comfortably, but not so
much that it becomes a bother to have to walk from one place to the next. That
is, if having to get something of an office supply nature, or purchase food for
lunch, that approximate area worked just fine, without feeling too constrained.
Following this, the next step was to move up to envisioning this single cluster
(or hub) size as part of a full-sized community, designed using this demo 30
acre plot schematic. The full-sized community layout had previously been set at
a round 100 hectares (equal to 247.1 acres) or 1,000 metres on a side. Adding
buffers around each property, but compacting the schematic using pie shaped
wedges, generated the D300A-6H schematic (shown below). At the same time that
this was done, the author was present in a picturesque portion of Ontario, with
huge tracts of undeveloped land nearby. Land isn't a problem. How we manage it
is. This led to ensuring the entire 100+ hectares was surrounded by a buffer
zone, expected to be naturalized. Having property that backs onto a wooded area,
even if it isn't owned by the property owner, is a theme that recurs when
homeowners talk about the property they live on. People like being close to
nature, even if only looking at it or tramping through it from time to time.
Looking at this schematic, it will be noticed that the width is less than the
height. Understanding how to calculate this difference leads one back to high
school geometry. It turns out that the formula for calculating this difference
is identical to the formula for the height of an equilateral triangle given its
side (the square root of three, divided by two). This comes from trigonometry,
where the ratio of width to height is sin(60 degrees), which is SQRT(3)/2.
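Worked out with a round figure (the 1,000 metre dimension echoes the
full-sized layout above; which edge it is applied to is illustrative):

    <?php
    // Ratio of width to height is sqrt(3)/2, the same factor that gives
    // the height of an equilateral triangle from its side.
    $height = 1000;                  // metres, longer dimension
    $width  = $height * sqrt(3) / 2; // ~866 m

    echo "Height {$height} m -> width " . round($width) . " m\n";
    echo "Factor: " . round(sqrt(3) / 2, 4) . "\n"; // 0.866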
Design for a 300 acre, 36 Lot Self-Sufficient Community (PDF)
In watching behind the scenes footage of the making of the second episode of
Star Wars, a key discussion is how much to make digital, and how much to make
analog (using physical 3D characters, costumes and sets). This is not a trivial
distinction, as having actual characters speaking and moving is more convincing
on screen. Even if much of the emotion and subtlety can be captured with
digital animation, the volume of information conveyed in the 3D physical world,
with people moving about in it, is much greater than is expected to be
replicable in a digital environment. At least for the purpose of assessing the
functionality of a 120 hectare (300 acre) design with 36 to 180 people in it,
the ability to create this environment in 3D will provide much more useful
feedback than if creating it digitally. However, to a certain point, digital
will consume fewer physical resources and time. The question is, what is that
point?
The easiest digital environment to create is two dimensional. This can be
done using an editor which works with scalable vector graphics. Although this
format is intended to be portable between editors, the experience of creating
schematics with an online editor, downloading them and then opening them in an
open-source vector graphics editor (Inkscape) showed the two were not even
close in terms of their interface. The online editor was much easier to use and
more intuitive (although it was less flexible). Previous experience and perhaps
formal training are definitely an asset when using this type of graphics
editor. In addition, the SVG (Scalable Vector Graphics) file can be exported
from the online editor, but it can't be imported, perhaps due to security
concerns. This means that it is not possible to employ the principle of
having a graceful low tech fallback for these files, as working with them
offline (if the internet connection is lost for whatever reason) and then
reuploading them at a later point (if internet access is restored) is not
possible. Thus, one has to proceed with care when starting down the road of
serious design, on which the entire community depends. Ideally, offline desktop
or laptop open sourced software (such as Inkscape) would be used, so that the
work can be shared offline between users within the same community, as well as
being sharable online with the broader community.
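One property of SVG that favours a graceful low tech fallback is that it is
plain XML text: a schematic can be generated or hand-adjusted with nothing more
than a short script and an ordinary text editor, entirely offline. A sketch
(the dimensions and file name are illustrative only):

    <?php
    // Generate a minimal offline SVG schematic: a square plot with the
    // allocated circle and the commons circle inside it.
    $side      = 400;          // drawing units
    $cx        = $side / 2;
    $r_major   = $side * 0.45; // allocated circle
    $r_commons = $r_major / 3; // commons at one third of the diameter

    $svg = '<svg xmlns="http://www.w3.org/2000/svg"'
         . " width=\"{$side}\" height=\"{$side}\">\n"
         . "  <rect width=\"{$side}\" height=\"{$side}\" fill=\"none\" stroke=\"black\"/>\n"
         . "  <circle cx=\"{$cx}\" cy=\"{$cx}\" r=\"{$r_major}\" fill=\"none\" stroke=\"black\"/>\n"
         . "  <circle cx=\"{$cx}\" cy=\"{$cx}\" r=\"{$r_commons}\" fill=\"none\" stroke=\"black\"/>\n"
         . "</svg>\n";

    file_put_contents('schematic.svg', $svg);

The resulting file opens in Inkscape or any browser, and can be versioned,
shared and edited without an internet connection.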
Another major consideration in deciding the right digital/analog mix to aim
for is the experience and knowledge of the designer or designers in working with
these two mediums. An experienced analog designer can make a few strokes with a
pencil, and the outline created will be meaningful. Likewise, an experienced
digital designer is expected to be easily able to create an environment using
digital tools familiar to them. Third, if the land is available, so that a
design may be mapped out on it using pegs, stakes, ribbons, ropes or spray
painted lines, this also will reveal strengths or weaknesses of a design, as
well as suggest new possibilities. For example, building in a clearing on a
west-facing slope that is open to the hot summer sun will reveal that this is
not a good idea. The heat, combined with the lack of a breeze, will not make
this a comfortable place to be at that time of the year. In the
winter--however--the reverse will be true; passive solar heating can then be
used to reduce external energy inputs to maintain the building at a comfortable
temperature. An option that opens up by making the buildings movable is that
they can then be kept in the shade as the daily sun progresses through the sky
in the summer months, or kept in the sun during the winter. This is not the type
of dynamic that easily reveals itself by working solely in a digital medium. The
decision to move a building in step with the sun's daily path has to be made
before any building is done, as it affects the basic elements of the design.
A Contractual Distribution Network is being defined here as a
network within which individuals have contracted to have their products and
services distributed. The model depicted in the schematic shown at the top of
this page is a version of this network in which select trades and professions
live within an area of 1000m x 1000m. This can be seen as an idealized situation,
or it could be set up as an actual, physical community to determine the
effectiveness of the model and refine it if necessary. Another way to look at
this is to imagine what it takes to support the crew of a large ocean-going
ship, with passengers. All the support systems need to be on that ship, for the
duration of the voyage across the waters. If any of them fail, or if supplies
run out, the comfort or even the lives of the crew or passengers will be at
risk. That this is done (and has been done for years) demonstrates that it is
physically possible to meet the needs of a large group of people for an extended
duration, without external supports.
However, most people do not live on large ocean-going
vessels, where most, if not all, of their daily needs are met by design, with
redundancies built in. To make this happen on the ground, on land,
requires a similar level of engineering, planning and preparation. When a
passenger embarks on a voyage, there is a sense of commitment. They are
definitely going on a journey when they step aboard a city-to-city
bus, train, airplane... or ship. The individual preparation is met with the
preparation that has gone into the engineering and structure of the vehicle on
which they travel. This sense of commitment is missing for the person on land,
when they have the option to go to one of three major brands of grocery stores,
have water coming out of the tap at the kitchen sink, can buy similar articles
of clothing at three different price points within the city they live, walk,
take the bus, call a taxi, rent a car, or purchase a car to get from point to
point, etc. Water does not surround their home, or even their
community, forcing them to "make a go" of it alone or with the people within a
ten or fifteen minute walk. This dilution of commitment, through the many
parallel means of meeting needs, is the result of an accepted and implicit
system. It is not easy to change, unless sufficient motive and structure are
present to allow it to change.
Having said all of the above, the focus here is on the fact that a
contractual distribution network can be set up relatively easily. In
fact, networks which allow individuals to barter are already available,
the distinction being that they are not (to the author's knowledge) part of a
system robust enough to ensure that the needs of the individual can be met
(assuming, of course, that the individual wants a given need met at any
particular time). The example that comes to mind is
that--when growing up--I sat down every night for supper. It was always there,
it was nutritious. All I had to do was eat it; which I did without fail (unless
of course, I was too sick to eat, which was rare). This picture stands in
contrast to one where the individual cannot obtain what they need to
eat. It could be too far away, they may not have the money to buy what they
need, or they may be too busy doing something else to ensure they have a
balanced, nutritious meal on a regular basis. There are fine distinctions. There
always are, but discarding the potential for a functioning, robust system with
the cry of a third-order referent (pick one, among many) is going too far. The
key difference is the number of people and the geographical area. If the
numbers, the structure, and the distances between them are balanced, with
carefully selected technology implemented within that system, it
should work. That is what this effort is about.
A comprehensive, contractual trading and distribution network operates
primarily within a limited geographical area, where products and services are
marketed and moved through a fine balance of competition and cooperation.
Competition happens through rate of delivery, quality, durability, product
support, etc. Cooperation happens through acknowledging that others' service,
knowledge, skill and presence are needed. The network is time-limited
through a five-year cycle that ensures "freshness", where the entire system is
moved to allow for change and prevent stagnation. This movement may be physical,
conceptual, and include the movement of participants into and out of the
network.
Comprehensive - Designed to provide 75%-100% of people's needs for the duration
of the cycle.
Contractual - Participants agree to provide products or services according to
design specifications, to ensure needs are met and products and services are
moved.
Distribution - Products and services are marketed and moved through the network,
to maintain product and service freshness and to ensure needs are met;
one-to-one trading is not necessarily required.
Limited - Ideally walkable.
Network - Can be embedded within existing systems.
While it is theoretically possible to place a contractual network anywhere
on the globe, it is easier to develop it in areas that are familiar to its
participants; that is, beginning where they are, rather than requiring them to
move. Having said that, it may be beneficial for them to move away from an area
they have been living for a number of years, to a spot that is less familiar to
them; but not too far. Moving more than several hours' drive away from the area
they grew up in, or have been living in for a number of years, results in a greater
change. This change may be stressful in subtle ways. When moving east or west,
the sun rises and sets at different times. When moving north or south, the
growing season contracts or expands correspondingly. With that, the terrain may
change, as well
as the plant and animal life. And along with that the type of
industries will shift. Moving north in eastern Ontario, the industries shift
from pastoral agriculture to tourism, lumber and mining. If a change is made
without awareness of these dynamics, years may go by with differences noted but
not precisely identified; the changes may affect the participant in subconscious
ways, not all of which may be beneficial. For example, a shorter summer and
growing season means less light. This may result in a depressed attitude, if
steps aren't taken to adjust for changes in light during the year. This could be
as simple as increased intake of vitamin D
during the colder months of the year, improved indoor lighting during this time,
a more comfortable ambience, such as with a fireplace, and lastly, reduced
activity, as many mammals hibernate during the winter months. Following
their lead and spending more time resting and being still could make the
difference.
Second, the design and intent of a self-sufficient community is to allow it
to operate independently of external supports, if needed. The extent to which it
makes use of surrounding communities for stores, services, power and waste
removal depends in part on the proximity of these amenities. The further away
they are, the less convenient it will be to make use of them, and the more
independent this community will need to be. Conversely, the closer they are, the
more convenient it will be. For distances greater than a kilometre or two,
powered transport will be needed. This needs to be factored in, to ensure that
powered transport is always available. Or, if not, that alternatives are
available, such as a store of fuel, food, and other consumables. Realistically,
the choice of a property will depend on the terrain, proximity to water, and the
degree to which existing nearby communities facilitate and support such an
endeavour, which includes--of course--falling within existing bylaws, etc.
However, the focus here is not on a "money first, bylaw first" approach. These
considerations are placed near the end of the design process, by which time it
is hoped that it will be easier to merge the two. That is, given the need for a
sustainable approach that is proven, practical and achievable, it is hoped that
a design which has sustainability--or more precisely, self-sufficiency--as its
core principle will be spurred on by existing communities as a
reasonable way forward.
Finally, another key concept of this community design is that it is
movable. While this is certainly not part of the way in which current
communities have been designed, it is definitely technologically feasible.
Consider the fairs that happen every summer in small towns across Ontario. A
major part of these fairs are the rides and other attractions. These rides are
designed from the ground up to be movable every few days to a week or so. The
author worked in one of these for a portion of a summer. While not the best job
he ever had, it provided a first hand look into how they worked, including the
design of the trucks and flatbed trailers that accommodated the rides.
Specifically, the generator trailer housed a full-sized generator that provided
power for all the rides. When taken as a whole, the entire setup amounted to a
small city that could be set up or taken down in half a day.
The hours were terrible, the pay atrocious, and the work demeaning, but it goes
to show that it could be done. The intent of this moving city was to provide a
thrill for the moment. The intent of a self sufficient community is to provide a
lasting model to help our planet move forward into the next generation. With the
right motivation, there is no question that a way can be found. With a movable
design being built in from the ground up, if Location A does not prove to be the
best, Location B can be scoped out and moved to within a few years. This feature
alone makes this design more resilient and adaptable than other, static,
models.
In the section entitled "Selection of Topics" a list was provided ranging
from small to large. The smallest was genetics; the largest, galactic if not
universal in scale, was exoplanets. Criteria used to determine which
topics to select are: potential value, interest, feasibility and versatility.
That is, what is the potential value of engaging with that topic for the near
and long term? How interested would people be in it? How feasible is it? And,
how versatile would the information generated from focus on this topic be? To
put these topics in context, much of the activity of the past two hundred years
on this continent has had to do with resource extraction (such as lumber and
minerals), food and animal production (incorporating genetics and species), and
now knowledge-based industries (including programming, data analysis, CAD,
etc.). As we have been using up primary resources, and they are limited, it
makes sense to begin looking for what is next. The topic that meets all of these
criteria and provides the most potential value is a focus on Exoplanets.
First, it should be noted that a focus on this topic does not mean an
exclusion of the others. Rather, a focus on this topic allows the others
to be incorporated into it. That is, genetics, species, geology and
climate all are potentially subsumed under the topic "Exoplanets". This term
would likely garner the most interest. It would be versatile, as the survey of
these sub-topics here could be used as a template for an assessment of potential
environments there. The potential value is high, as each additional planet added
to the lists multiplies the potential found here. This naturally would be of
interest to professionals in the academic and applied disciplines and would be
of at least a passing interest for those not directly involved in the relevant
fields. This then leads to a reason for the development and testing of a
self-sufficient community: a community that demonstrates self-sufficiency here
automatically is a candidate for self-sufficiency elsewhere. Though the primary
criteria are an oxygen-based atmosphere, plant life and a suitable gravity, a
benign atmosphere is not strictly required, as a dome could be placed over the
community. It could also be placed partly underground.
This "one page" website format was developed after using every possible
method but this one. The addition of one page leads to the addition of another;
before long the site is littered with pages, some of which languish
unread and untouched for years. In addition, as mentioned, the context is lost
when a page is directly accessed and read without the benefit of the ideas that
came before or after the ideas expressed in it. A further reason for developing
this one page site is to make it more "portable". That is--in a literal
sense--it is easier to "take it with you" when that is all there is. That means
I can work on a laptop (or tablet if necessary) as I am now, stop at a location,
then keep going, writing a few paragraphs at a time, then pick up where I left
off, knowing that I have the entire work in front of me. There are no
complicated content management systems to deal with. No "online" that must be
present to work (I am working from a local server set up on the Ubuntu OS), no
databases to back up and download, no remote updates making changes at
unexpected times with the potential to break the site, and no third-party costs
beyond hosting, nor "free" solutions that have the potential of disappearing in
a few years' time. Thus, this site is a demonstration of a
concept. If it lasts, and remains functional over a few years without breaking
or the need for updates, then it is a keeper. In addition, if the styling is
lost, or even if the ability to format HTML is lost (who knows?), then the base
text can still be read. This can be checked by right-clicking in a desktop or
laptop browser window (not a tablet or smartphone) and selecting "View Source".
Scroll down a few screens and you should be able to start reading in the same
format that I have been typing it. That means that--if all is lost--it won't be.
It will still be accessible by using the right methods, employed by the right
people, in the right way. And that, my friends, will make a future Rodney McKay
very happy.
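For anyone wanting to work the same way, one simple option (an assumption here;
any local web server will do, including the Apache HTTP Server) is PHP's
built-in development server, run from the directory containing the page:

    php -S localhost:8000

The page is then available in a browser at http://localhost:8000, with no
internet connection required.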
Write Your Own Page Using This Example
The following is a selection of HTML elements that have been used to create
this site.
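A minimal page using such a selection might look like this (the specific
elements shown here are just one reasonable choice):

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>One Page</title>
    </head>
    <body>
      <h1>A Heading</h1>
      <p>A paragraph of text, with a <a href="#contents">link</a> and some
      <strong>bold</strong> and <em>italic</em> words.</p>
      <ul>
        <li>A list item</li>
        <li>Another list item</li>
      </ul>
    </body>
    </html>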
When the markup used to display text on a screen for the internet was
developed, it was not intended to be difficult. Take a few moments to look at
the example above. Is it difficult? No. With this basic example and a little
styling (included with this site) a valid web page may be written.
Contact
Author's Note: This one page site is still in the process of being written.
Therefore some editing may yet be required.
Contents