Scatological Sensibilities

An ecosystem is basically a poop-cycle.

This is a parody transcript of an advertisement I saw on Google's YouTube channel for a TensorFlow-based cattle-tracking system.

https://www.youtube.com/watch?v=6taIMlZysJQ

Using TensorFlow AI to track 'animal' behavior to optimize them.

I saw similarities between the cow-tracking systems on the market and the kind of data you can track on an Android phone. I have been quite alarmed of late by how pervasive Google's tracking systems are.

Particularly, I'm amused that Google knows when you are getting out of a car, even if you have airplane mode on. I also suspect they could be using barometric pressure to detect when you enter or exit a building. I'm also bothered by how quickly the industry has adopted audio beacons, a technique that only a few years ago was being used to spy on air-gapped machines. I've got a passing interest in IoT and mesh networking, so I did some research into what Google is doing with their location-tracking service, 'Nearby'. One disturbing thing is that, theoretically, any aspiring advertising mogul could recreate their stack with open-source code.
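To give a sense of how little it takes, here's a toy sketch of the barometric-pressure idea. This is entirely my own illustration, not Google's actual method; the 0.3 hPa threshold and 5-sample window are made-up values.

```python
# Toy illustration (my own sketch, NOT Google's actual method): flagging a
# possible building entry/exit as a step change in barometric pressure.
# The 0.3 hPa threshold and 5-sample window are made-up values.

def pressure_step(samples, window=5, threshold_hpa=0.3):
    """Return indices where mean pressure shifts by more than
    threshold_hpa between adjacent windows - a crude 'door event' detector."""
    events = []
    for i in range(window, len(samples) - window):
        before = sum(samples[i - window:i]) / window
        after = sum(samples[i:i + window]) / window
        if abs(after - before) > threshold_hpa:
            events.append(i)
    return events

# Synthetic trace: outdoors at ~1013.2 hPa, then a pressurized lobby at ~1012.7
trace = [1013.2] * 20 + [1012.7] * 20
print(pressure_step(trace))  # flags indices around the transition at 20
```

A real phone's barometer is far noisier than this, but the point stands: a sensor nobody thinks of as a 'location' sensor can still leak location-shaped events.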

I deal with things that scare me by trying to understand them. I've found one of the best ways to understand something is to relate it to something you know. For me, this means I rewrite interesting stories using jokes and subtle language transforms. It helps me map an idea space to something I'm already familiar with. The humor keeps me interested and curious despite somewhat uncomfortable topics.

This is the result:


ADVERTISER:

I'm an advertiser, and this is my people-farm on an island of the internet. People-farming is hard. It's a lot of work. If you're working with live humans, there's a lot of uncertainty. Things can change within a day. [CONSUMER SCREAMING] You just have to be really focused and keep track of it all. The consumer's health is the most important thing, because if a consumer is not producing ad-revenue, they will also require more attention. [CONSUMER SCREAMING] That's why we adopt new technology. We're searching for things that can help us out. And artificial intelligence can be a great tool to give Ad-Agencies more insight. The app, AdMob, helps me keep track of my consumers, so it makes life easier.

GOOGLE SUCCESS MANAGER:

AdMob is really the people-farmer's assistant. It starts with a sensor that sits in the hand of a consumer. And by analyzing the movement of the sensor, we use TensorFlow machine learning to figure out what the consumer is doing! We can actually distinguish multiple behaviors of this consumer: eating, pooping, driving, standing, running, and walking. So if AdMob sees a certain pattern in the consumer's behavior, it can actually figure out the product that the consumer is prone to get, and can advise the farmer to advertise it to them.

ADMOB CTO:

The key thing with AdMob is that it learns. Using TensorFlow, an open-source AI tool that Google has authored, it learns the behavior of humans, and it gets better over time.

ADVERTISER:

That's helpful for improving the efficiency of my ad-revenue production. It also improves human productivity and keeps the consumers healthy and comfortable.

ADMOB CEO:

We see advertisers as evolving. Technology and the advertiser are going to work together.

SPEAKER:

About a billion people globally are engaged in people-farming. And by solving some of these problems, we're able to help advertisers run a more efficient farm. And we can do it in a way that is sustainable for us as advertisers and for our economy.

ADVERTISER:

The one thing that people should remember is with this technology... Consumers are Just Cattle.

Scatological Sensibilities is a collection of thoughts and ideas from an aspiring cyborg who goes by @ultimape. It consists of insane ramblings, well-meaning anti-social behavior, technology, philosophy, and other interesting scat that catches my eye.

About Me

I am UltimApe.

I am a wandering scholar and informavore.

I am an ant consuming interestingness and extruding insight porn.

I am trying to figure out my place in the world while maintaining homeostasis.

I am a cyborg slime-mold brain fungus piloting a complex exosuit made of flesh.

I am a sufficiently advanced sentient abacus honed by a learning process built upon complex systems reacting to their environment. I also poop.

Contact

The best place to contact me is via keybase.

I maintain two accounts in the fediverse at the moment. @ultimape@refactorcamp.org @ultimape@mastodon.social (semi-private)

I am also on Twitter for the time being.

I rarely check my email for personal correspondence, so I don't list it here.

Support

I enjoy any and all insightful commentary; my primary purpose in writing is to explore the space of ideas that interest me and to seek out peers to engage with as this learning journey unfolds.

But if you want to support me beyond just being friendly, I have a Patreon and Paypal. (BTC address available upon request.)

This is a list of interesting tags that I use or intend to use within this space.

High Level Tags:

Larger structures describing the kinds of writing on this blog.

#essay A piece exploring an idea, enough to make a point.

#longform A longer piece that digs into an idea at some level of depth. May or may not be an essay or a form of poetry.

#linkvomit Somewhere between an essay, a bulleted list, and poetry – usually a collection of links that simultaneously explain a narrative and serve as part of the narrative structure.

#poetry “ab·strac·tion” – Explorations of ideas but in the form of various kinds of poetry.

#metaphor “re·ca·pit·u·lation” – Ideas explained with other ideas.

#draft “i·dea vom·it” – Something that got to the point of being somewhat publishable despite not being a complete idea. Usually it's because I don't see an easy way to wrap up the idea and it's been incubating too long.

#archive “void” – Older content that I've put here to reference, but not really something I consider featured.


Areas of Exploration

Tags that describe various interests and topics commonly explored in this space.

#politics “copro·phage” – Explorations on political topics, usually it is meta-political, but occasionally there are bits of actual politics. * I consider this tag to be an infohazard due to potential for triggering arguments. Tread with an open mind.

#ants “jag·lav·ak” – Anything about ants because I love ants.

#microbe Explorations of the microbiome and related.

Research Interests

Particularly focused explorations on a topic that could be considered part of the same narrative corpus. I consider these to be akin to sequences, but as unordered sets rather than ordered lists.


#guide #context #index #map

An exchange seen in the droppings of the bird

I follow @hardmaru and @BenedictEvans on Twitter because of their keen views of their fields and their interesting insights. TL;DR: I follow them to learn from them.

So about a month ago, the bird surfaced this chain of comments at me:

  • “Many people simply don’t want an algorithm to decide what they should see. And we should respect that. (note: includes images)” — @hardmaru

  • “Then they’ll see 1-2k posts per day – that’s how much is actually posted. But of course no one will scroll that much. So they’ll see a random sample of whatever was posted in the hour or so before they open the app, because that’s all they’ll get through.” — @BenedictEvans

  • “I think it depends on the number of “friends” one has, but yeah someone might be dumped 1-2k posts/day if they have a thousand “friends”. But hey, if that is what they want, they should have that option...” — @hardmaru

  • “Average posts/day the average user is eligible to see is 1-2k. 150 friends, post/comment/like/share a photo a total of 10 times each, and there you are at 1500. Chronological feeds are just too big.” — @BenedictEvans

  • “Chiming in here to say my opinion here is there are other options aside from: 1) users have to see everything with no control, 2) users see what an algorithm says with no control. (I think the answer is finding ways to put the user in control)” — @jezzamonn


Ranting behind a protected account. Subtweeting on steroids.

That exchange upset me. Enough that I wrote a lot of tweets in a rant.

My account was protected at the time. It was fun having a locked-down account. I could speak my mind even though I knew Benedict wasn't going to change his, much less see my rant.

Being locked-down, I could stream out an emotionally inspired rant, tagging it onto his comment knowing he was none the wiser. He never seemed to block me, so I suspect no one told him about it. Or no one saw. Either way, I got it off my chest and I felt better.

In the end, those tweets I made were really just venting. I saw people's comments on the topic of Facebook's algorithmic feeds, and saw that those feeds were causing pain. Then I saw Benedict's comments showing up in my own feed. And I felt anger.

It makes me angry because of my own persistent lens of the topic:

“The personas targeted by shady marketing and #darkpatterns have the faces of my friends and family. I rage when I see them being targeted.” — @ultimape

So I deleted the tweets I made and instead decided to write about it here.

I'm not from the camp of people who frame VCs from SV as uncaring monsters. I chose to capture the anger I felt and try to see where he was coming from. Even in my rant, I wasn't trying to be uncharitable in my framing. I could have used harsh words, like calling him callous and pointing out that his obstinate framing of this topic is a broken record. But I didn't.

I've been chewing on these aggressive thoughts a bit to try and understand them. Digesting my thoughts. Letting the ideas stew in my mouth for 29 days. And now I'm spitting them out.

My goal here isn't to harass Benedict, but to understand him. To try and frame my own understanding of his point of view and hopefully explain where my own is coming from. To help me be less angry next time.


Riding the wave of emotional turmoil.

So after chewing on my anger for a while, I think I figured out what was happening in that original exchange. It feels like an artifact of Benedict's expertise.

The thing you have to understand about Benedict is that he's a great strategist. I'm fairly certain this is a big reason why he's a partner at A16Z. He's smart, sharp, and his narratives are strong. He can see things from the perspective of an org. He is able to take this perspective, map out the larger territory of a software space, and see how features manifest to create the desired results.

When I'm at my best, I would be flattering myself if I said I was “half as astute in thinking about how all of these things fit together”. At least in software land. There's a reason why I follow him to learn.

However, in this exchange, that strength was his weakness. I think HardMaru was empathizing. I, as a passive reader, was also empathizing. And Benedict? Well, he was in systematizing mode as far as I can tell. And like oil and water, it didn't mix well in my head.

He's not looking at it from a user-experience lens, or at least if he is, he's not mentioning that framing very much. If he's looking at it from the organization's perspective, well, it's not the most productive frame to take. It comes off as more of an ¯\_(ツ)_/¯ response than one trying to empathize with the user's sentiment.

I think that is what was making me angry. It felt like he was disrespecting and disregarding people's perspectives and views. He was replying with his bit about how feeds are limited and necessarily need to be filtered. And he seems to miss the point about users wanting a feeling of control, the core issue. This is a classic Empathizing-vs-Systematizing divide, and one that I see all too often in myself.

Replacing Outrage with Curiosity

It would be unfair of me to say he's ignorant of the way our info streams are being managed. Some of his other tweets on similar topics suggest that he's not entirely disregarding the user experiences. I've seen examples where he's more than willing to speak his mind about how shitty some of these things become under the wrong incentives.

Heck, he even pointed out the very reason why I stopped using, and eventually deleted, my own Instagram:

  • “I know Instagram needs to hit its numbers, but a low-quality and irrelevant ad every 5 posts is excessive. No, I am not renting an apartment in Jersey, nor do I want to learn Russian so I can speak to women.” — @BenedictEvans

  • “If I report an ad for abuse, I really shouldn’t see it again. Or is that the equivalent of the ‘close door’ button on elevators?” — @BenedictEvans

My respect for Benedict comes from his trend-spotting and the way he explains those trends. So it makes me really happy that I think I saw this one before him. Though maybe I was just part of a different A/B testing group?

“I've stopped using Instagram now that every 4th post is an advertisement. I haven't posted since October. What started as a way for me to share my world and connect with people is now just another vehicle advertisers use to inject their bullshit into my world.” — @ultimape

Like a broken record.

Scrolling thru Benedict's tweets, much of his discussion ends up taking the same shape as this perception filter. Others have tried to argue roughly the same thing to him that I'm about to argue in this rant. But it is like he's got this statistic of 300 statuses and a 1500-2000 item limit stuck in his head.

You can see it here in his musings about the direction of where algorithms are going next:

  • “All social apps grow until you need a newsfeed. All newsfeeds grow until you need an algorithmic feed. All algorithmic feeds grow until you get fed up of not seeing stuff/seeing the wrong stuff & leave for new apps with less overload. All those new apps grow until...” — @BenedictEvans

  • “Why can't the users get a choice? Algorithmic or Raw!” — @zahidtg

  • “raw is a filter too. you only have so much time in the day” — @BenedictEvans

  • “Raw is not a filter, it just leaves the filtering to the user, which I prefer” — @klangberater

  • “With 1500-2000 items, then the filter is time, and the ordering is random.” — @BenedictEvans

  • “Yes, but I still prefer that compared to an algorithm where my preferences are only a small part of the equation” — @klangberater

  • “Raw is transparent to the consumer; their own sliding scale of commitment vs value/reward. Algorithmic is opaque and, overtime, degrades the user’s perceived value of the feed content. The big devaluation point comes when you see you missed something you badly wish you hadn’t” — @1jrice

And when I take on his frame, I don't disagree. I even remember reading the same report that cited those statistics when it came out.

For reference, I think what he originally read back in 2013 was directly from the engineering team (or maybe he heard it thru the grapevine?):

“This allows us to prioritize an average of 300 stories out of these 1,500 stories to show each day.” News Feed FYI: A Window Into News Feed

But it might have been this Guardian piece from 2014:

“Backstrom explained in August that Facebook's news feed algorithm boils down the 1,500 posts that could be shown a day in the average news feed into around 300 that it “prioritises”.” How does Facebook decide what to show in my news feed?

Or the article in Time magazine that broke down the history:

“Facebook says the average user has access to about 1,500 posts per day but only looks at 300. (A user who scrolls endlessly will eventually see every post from their friends and a smattering of posts from Pages they follow.)” Here's How Facebook's News Feed Actually Works

And I think it will take more than my spiel here to change his thoughts on it. It seems to be part of his head canon and narrative. A strong point in his larger theorizing. He repeats it often while building up to larger premises.

“The average FB user is eligibile to see 1500-2000 items a day. This sounds absurd, but it’s only 150 friends posting, liking, sharing 10 times a day each. And 3 seconds x 1500 items is 75 minutes. There will always be a filter – the only question is what kind.” — @BenedictEvans
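For what it's worth, the arithmetic in that tweet checks out; a trivial back-of-the-envelope script (the 150-friend, 10-action, and 3-second figures are his, not mine):

```python
# Sanity-check of the feed-volume math Benedict keeps citing.
friends = 150          # Dunbar-ish friend count, per the tweet
actions_per_day = 10   # posts/comments/likes/shares per friend per day
seconds_per_item = 3   # skim time per feed item

eligible_items = friends * actions_per_day           # items/day a user could see
minutes_to_read = eligible_items * seconds_per_item / 60

print(eligible_items, minutes_to_read)  # 1500 75.0
```

Which is why his volume point is hard to argue with on its own terms – the disagreement is about who controls the filter, not whether one exists.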

Stuck in the mud

His lens and responses always seem to be about growth and effectiveness of the design in terms of the org's desires. You can look up “feed” and see it in all his responses. It is like a myopia biasing his thought patterns.

I might be projecting my own negative stance here, but it seems he's got his head stuck on the question of “what kind of filter” – it assumes that Facebook's only option is to paternalistically filter your feed for you. Like a broken record, this topic comes up, and I think he misses the point people try to make every time.

  • “Joining Twitter, you have to spend weeks/months working out who to follow before it gets great. An automated feed is an obvious solution” — @BenedictEvans

  • ”@BenedictEvans people are stupid and can't look at lists? oh my.” — @CaseyParksIt

  • ”@CaseyParksIt that’s not a sensible view. Certainly isn’t what I said” — @BenedictEvans

  • ”@BenedictEvans an automated feed is not an obvious solution unless you think ppl are too stupid to figure this simple design out.” — @CaseyParksIt

  • ”@CaseyParksIt it is self-evident that the current UX is baffling to most people. Hence Twitter has stopped getting new users.” — @BenedictEvans

And like a car stuck in the mud, he keeps kicking up the same sort of muck while trying to dig himself out of his entrenched way of thinking:

But if Google and Facebook don't have any control over what is in your feed, and you don't have any control over what is in your feed... who does? From where I'm standing, this is why government regulators are stepping in and asking questions. They want to limit that control (or take it?).

And if you read his article, he describes the very problem I'd like to highlight with the way these systems are built – a different point from the one he is making with his ideas.

“while this is explicit for Google, it's implicit for Facebook. You tell Google explicitly what you want and you don't think you tell Facebook, but actually you've spent months and years telling it, through everything you've interacted with or ignored. Facebook makes technical, mechanistic judgements about what will be in your newsfeed that are just as bound by things beyond its control – by the internet – as Google's are. It's an index of its users. Every now and then, it decides that it's got off track, no longer aligns with users, and course-corrects. [...] This means that Facebook is surfing user behaviour, and must go where the user takes it. This is why it looks like such an unreliable partner: it will invite you onto the surf board, certainly, but if you're unbalancing the board then it will push you off, and that isn't Facebook's choice. If it didn't push you off then the board would upset and Facebook would be at the bottom of the ocean as well, next to MySpace. The genius of Facebook has been to stay on the board all this time, and especially through the transition from desktop to mobile.”

And that, I think, is the core problem. The reason people don't like algorithmic news feeds is that the filter is being chosen for them. And everybody keeps talking about this, but because of Benedict's expertise and default perspective on the issue, he comes off as blind to the experience.

So if I use his metaphor about Google / Facebook being surfing platforms, I think I can frame the problem the users are running into: the source of their frustration, and Benedict's apparent blind spot.

The users (riders?) want to be able to choose the style of the board based on what they are comfortable with.

  • Maybe they want a wake-board experience – to be pulled along behind the stream currents of a fast ship.

  • Maybe they are a pro-surfer and need something lightweight and highly glossed so they can execute cool tricks on the waves.

  • Maybe they're just there to hang out with their friends and need a paddle board, wanting to stay away from the waves.

But Facebook decides all of this for you, by “making technical, mechanistic judgement about what will be in your news-feed”.

And so, now that I have a better understanding of where he is coming from, I am going to recapitulate the rant I had written.


The rant

THIS IS ABOUT CONTROL.

Listen to what is being whispered, not what is being said. The subtext... the subtext is why articles like this get so many clicks: GOOGLE’S SELFISH LEDGER IS AN UNSETTLING VISION OF SILICON VALLEY SOCIAL ENGINEERING

The subtext is a fear over the loss of control. Fear over silicon valley manipulating the masses. Or someone manipulating them. Sock-puppets, bots, fake accounts, fake news. Lies. Misdirection. Forced manipulation of our attention.

This is the conversation our networked society is whispering. They just aren't saying it with the right words.

Who is controlling the Surfboard?

Facebook presents a black-box system that is constantly changing and not transparent. This leads to all sorts of emotional buttons being pushed. Loss-aversion, FOMO, etc.

People are frustrated by it. So they blame “algorithms” when the real pain they are feeling is an unease about paternalistic design.

Playing poindexter over the definition of algorithms is completely missing the point. People use the language that is presented to them thru modern culture. Algorithms are the demons haunting them; says the media. And so they parrot the language they were told to express their hatred for the demons.

Now, 95% of the junk out there is conspiracy theory about Facebook. But the underlying whisper? Fear + loss of control. Over time, these feelings translate into fear of subterfuge and evil motive. But framing the user-base as 1) ignorant or 2) crazy is not how to generate an empathetic design.

People are so frustrated and disillusioned by the lack of control and transparency, that the governments of the world are starting to take notice. Albert Wenger talked about this in 2015 (emphasis mine):

“This is important, not just for drivers. We are all freelance workers on Facebook and on Twitter and on all these big social networks. Yes we in part we get paid through free services, free image storage, free communication tools. But we’re also creating value. And it’s not just the distribution of value that we’re worried about, we’re also worried about what do these companies do? We’re worried about questions such as censorship. We’re worried about questions such as, are we being manipulated by what’s being shown to us in the feed? And at the moment, what regulators are doing is they’re trying to come up with ad-hoc regulations to regulate each and every one of these aspects. And many of these ad-hoc regulations, are going to have completely unintended consequences. And often these consequences will be bad. Let me just give you one example. The European Union has said, if you want to have information on people who live in the European Union, you have to keep it on European Union servers. That actually makes it harder for new networks to get started, not easier. It actually cements the role of the existing networks instead of saying we need to create opportunities for competition with existing networks.” BIG and BOT Policy Proposals (Transcript)

This is Mechanism Design.

Do you know how people respond to a “random” feed? By following more people to increase the chance they'll see the content they want. My mom had 8 Facebook accounts – she wanted to make sure she didn't miss any of her kids' posts. Fighting an algorithm she had no control over.

Do you know how people respond to a “random” feed? By posting more content to increase the chance they'll be seen. And if their livelihood depends on it, they'll pay for and join bot-rings just to get seen: Real People Are Turning Their Accounts Into Bots On Instagram — And Cashing In

People are so sick of the lack of control that there are even art projects that effectively use 'chaffing & winnowing' against the “algorithms” to hide themselves. Confuse Facebook's Algorithms with the 'Go Rando' Web Extension

This is what happens when under-thought design butts up against basic human frailties in reasoning about time. Whispers of corrupt algorithms slither thru the user-base. They don't quite understand. It coalesces into asinine requests to get a random sampling of part of their feed. “Protect us from Ourselves”

Social foraging as a swarm-intelligence filter? “It's Not Information Overload. It's Filter Failure.”

Design problems? Design Solutions.

So we miss the point of what people are saying here. They want feature X because they have problem Y; they don't know how to express Y, but they know X would solve it. Give them Z.

And yes, there is a problem here to solve, but it is one of mechanism design.

How Game Designers Protect Players From Themselves | Game Maker's Toolkit

... But would that ruin the ability for Facebook to slip ads into the stream and manipulate people's behavior?

It's not about the algorithm; the algorithm is a red herring. People would be fine with algorithms if they were something they could curate and opt into themselves. Heck, the problem would fix itself if you turned on the spigot of ordered content and presented a choice to opt in.

This suggests what people really want: better filtering tools that they can control – even if that just means a dashboard that lets them choose how the feed biases and curates, and a view of what factors lead to it.

But they might make a choice that doesn't serve Facebook's targets. They might be night owls whose friends aren't up, so they might not use the platform as much. An algorithmic feed controlled by Facebook means the incentive is for a dopamine drip that boosts engagement.

Or worse, Facebook decides that it wants to show how many friends have done or liked something as a form of social proof. It's nudge theory, but weaponized in a way that is outside the user's control. It feels manipulative, but users can't quite say why.

So why the focus on the feed?

What does the want of an unfiltered linear feed mean? What are people really asking for when they ask for that? What pain are they solving for when they make this request?

A linear, chronologically ordered feed is predictable. It's not hiding anything, it's not wresting control away from you. It isn't manipulating you in a way you can't ascertain. That should be the baseline.

Arguing for algorithmic feeds is fine, but it should never take away a user's sense of control. If something is hidden, it had better damn well be because I asked the system to explicitly hide that kind of thing from me. I don't want some hidden algorithm tuned to manipulate me, and I especially don't want it presented to me under a guise of paternalism. That smells like bullshit.

But of course Facebook hasn't done that. They started by turning all the things you had 'liked' into pages, and handed control of those pages over to random people. Suddenly your feed was full of content from brands that had snuck in thru Girardian-style mimetic signaling. And they're using it to manipulate us, as far as we can tell...

And its creepy.

“So half the Earth's Internet population is using Facebook. They are a site, along with others, that has allowed people to create an online persona with very little technical skill, and people responded by putting huge amounts of personal data online. So the result is that we have behavioral, preference, demographic data for hundreds of millions of people, which is unprecedented in history. And as a computer scientist, what this means is that I've been able to build models that can predict all sorts of hidden attributes for all of you that you don't even know you're sharing information about.” Your social media “likes” expose more than you think

People started complaining that they couldn't see all their friends. But the options at the time ran counter to Facebook's intention of being a platform for celebrities, brands, and community-building thru pages. They are only just now undoing this crappy mechanism-design mistake.

Even their ads don't admit the mistake. They talk about friends and friends of friends, and all the crap that started polluting the feed. But the cat's out of the bag, and the ecosystem is polluted with people who have built up lives around those pages.

Facebook Here Together (UK)

And when we step back and wonder what is going on, we see something fishy, and it smells rotten: Facebook moves 1.5bn users out of reach of new European privacy law

TL;DR: Is this Loss?

The incentives aren't there, and the arguments for changing this are misunderstood. Which is why I deleted my Facebook, even though it's the only way I can contact my dad.

I miss my dad.

And now you know my perspective.


#rant #socialmedia #technology #userexperience #ifightfortheusers #refactoring #techdetox #detoxingthecommons #growthfetish

A map will go here.


#guide #context #index #map

The double sided nature of humankind.

A swarm of people sharing ideas.
Barking mad at humankind.

On Ownership of Thought.

The worst sin a human can make: is to own an idea.
The best win a human can make: is to own an idea.

To own an idea,
is to possess it as part of the self.
To own an idea,
is to confess it as part of the self.

On Tempering Thought.

The worst sin a human can make: is stealing an idea.
The best win a human can make: is steeling an idea.

To steal an idea,
is to take it for your own.
To steel an idea,
is to make it for your own.

On Amplifying Thought.

The worst sin a human can make: is to boost an idea.
The best win a human can make: is to boost an idea.

To boost an idea,
is to take it for your own.
To boost an idea,
is to make it for your own.

On Offering Thought.

The worst sin a human can make: is to sell an idea.
The best win a human can make: is to sell an idea.

To sell an idea,
is to take it as your own.
To sell an idea,
is to make it as your own.

On Characterizing Thought.

The worst sin a human can make: the idea that ideas are properties.
The best win a human can make: the idea that ideas are properties.

To take an idea as sellable artifact,
is taking apart of an existing in a swarm.
To make an idea as sellable artifact,
is making a part of an existing in a swarm.

On Valuing Thought.

The worst sin a human can make: to profit off an idea.
The best win a human can make: to prophet off an idea.

Stealing an idea, boosting it, treating it as a property of humans,
possessing it, selling it, making it as your own.
Steeling an idea, boosting it, treating it as a property of humans,
confessing it, selling it, taking it as your own.

On Generating Thought.

Confessing an idea possessing my mind,
barking it into the world.

When we be propheting from an idea.
We’re barking it into the world.

When we be profiting from an idea.
We’re hocking it into the world.

Ideas are our pawns, promoting them to create valuable pieces.
Ideas are our pawns, promoting them to create valuable pieces.

The problem of having an idea is when you think they are your own.
When you possess an idea, they possess you.
To have your own ideas is to be owned by them.
You do not own them. They are simply borrowed.

Boosting an idea by having someone steeling it.
Boosting an idea by having someone stealing it.

Borrowing an idea from the network.
Reinforcing it.
Returning it better than you found.
Paying your dues.

On Arbitraging Thought.

The idea that thoughts are properties that can be bought and sold.
The idea that thoughts are properties that can be traded and shared.
it means: that you can steal it.
it means: that you can steel it.
and to profit off it as if your own
and to prophet off it as if your own

To take an idea.
One built for someone else,
One made for somewhere else,
One birthed forth by a community of people.
Bringing it in to one’s own, and selling it.

To make an idea.
One built for someone else,
One made for somewhere else,
One birthed forth by a community of people.
Bringing it in to one’s own, and selling it.

On Thoughts of Thought.

Ideas are not property?
Ideas are communal properties?
Ideas are properties of communities?
A property of a community is having ideas?
they are a property of the community of the minds that created them?

Do they own them?
Do they make them their own?
Are they owned by them?
Who owns these properties?

On Singing Thought.

Those who sing songs.
Singing the songs that were taught to them.
Songs of the minds that stole them from their ancestors.
Recapitulating ideas; barking them out in to the world.
Singing thoughts like the wolf howls at the moon.

Ideas are communities.
Ideas are properties.
Properties of the human condition.
Properties of a way of being.
Properties of existing in swarm.

The double sided nature of the human mind.

A swarm of people sharing ideas.
Barking madness at the human mind.


#poetry #whatarewords #explorations