Introduction

So, most people are probably more familiar with social curation than they realize. In fact, many may be surprised by just how ubiquitous and, at times, invasive social curation is in their lives. In many ways, social curation describes the creation of pop culture as much as it describes the creation of societal norms writ large and affinity spaces writ small. It seems to be an almost natural, intuitive consequence of humanity placed within a societal context. This organizing principle streamlines society and helps people get along, for the most part. But what happens when this natural, human process is no longer a purely organic one? What happens when machines and AI interfere in the process? The interference of AI and other machine learning algorithms in the social curation processes that occur in online spaces has, in many noticeable ways, allowed for the creation of so-called internet “echo chambers,” for an increased reliance (especially among younger users) on evaluative features like “likes” for self-validation, and for what seems to be an overall devaluation of IRL experiences. At least, that is what this research proposes~

For those who really do not know what social curation is, according to a helpful comment left on a nifty Quora query, it is:

an organic activity that continuously aggregates and ranks content deemed most relevant, valued and of the greatest utility (e.g., “just in time” insight) to users. Sources of content can be published media, real-time information exchange (archived), or continuously evolving content (e.g., wiki, Quora). The social dynamic of content curation is individual and collective input, output and evolution of thought

As stated, social curation is essentially a kind of organizing principle that ranks content as more relevant based upon our interactions with it. If a majority of people find something more engaging or more interesting, it is ranked as more important and so becomes more ubiquitous throughout a given system by merit of that social popularity. In analog form, this kind of process can be seen amongst celebrities quite easily. If a lot of people demonstrate interest in, say, the lives of a certain family (watching their reality show, liking content on their social media accounts, buying products from their makeup lines), then tabloids and other media are more likely to report on that family, which causes even more people to learn about the family and engage with the aforementioned content, creating a kind of feedback loop until the next big thing swoops in and captures public interest. Social curation can be quite fickle, but you can see how the creation of a feedback loop like this can be very profitable for those in the loop. In online spaces, social curation operates in much the same way, with some distinct differences.
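To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. The item names, the numbers, and the “exposure” rule are my own assumptions and not any platform’s actual ranking code; the point is simply that whatever already has the most engagement gets surfaced first, and being surfaced first earns it the most new engagement:

```python
# Illustrative sketch only: invented names, numbers, and exposure rule.
# Content with more engagement is ranked higher; higher rank brings more
# exposure, which brings still more engagement: the feedback loop above.

def rank_by_engagement(items):
    """Sort content so the most-engaged-with items surface first."""
    return sorted(items, key=lambda item: item["engagement"], reverse=True)

def simulate_feedback_loop(items, rounds=5):
    for _ in range(rounds):
        ranked = rank_by_engagement(items)
        # Assume new engagement is proportional to how high an item ranks:
        # the top item gains the most, the bottom item gains the least.
        for position, item in enumerate(ranked):
            item["engagement"] += (len(ranked) - position) * 10
    return rank_by_engagement(items)

feed = [
    {"name": "celebrity family", "engagement": 120},
    {"name": "local news story", "engagement": 100},
    {"name": "niche hobby post", "engagement": 80},
]
for item in simulate_feedback_loop(feed):
    print(item["name"], item["engagement"])
# The item that started with a small lead finishes with a much larger one.
```

Even this toy version shows the rich-get-richer dynamic: the rankings never reshuffle, and the gaps only widen with each round.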

Discussing Concerns

One of the most important differences to keep in mind when it comes to social curation in online spaces is the sheer size of the audience–of the society–that is responsible for deciding the significance of content. If something “blows up” in an online space, it goes nuclear fairly quickly. Within minutes, a global disaster or a celebrity is top #trending across multiple social media platforms. On Google, that event will be the top search result. In most cases, it will be suggested by the search bar before you even begin typing. Op-eds will be disseminated as authorities on the matter before the dust has even settled. This speed unfortunately contributes to one of the first troubling issues associated with social curation in online spaces: the formation of filter bubbles (i.e., echo chambers). Filter bubbles online are commonly spaces where one interacts only with content that tends to agree with one’s worldview, cutting one off from the kind of critical discussion that might expand that viewpoint. While some filter bubbles can be fairly innocuous and may be more aptly identified as “affinity spaces,” such as those that arose in the early days of the picture-sharing site Pinterest (Hall & Zarro, 2013), far too many filter bubbles online are ground zero where hatred, intolerance, and bigotry fester. Learning algorithms certainly have not helped the situation: they are designed to serve content increasingly similar to what a user has already viewed in order to hold that user’s attention for as long as possible, so the user sees as many ads as possible and keeps boosting videos whose rank depends on view counts (Zeynep).
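That “show me more of what I already watched” logic can be sketched just as simply. The catalog, the topics, and the similarity rule below are invented for illustration (my assumptions, not drawn from any real recommender), but they show how a feed that only optimizes for similarity drifts into a bubble:

```python
# Toy sketch with an invented catalog; not any platform's actual recommender.
# The feed always serves the unseen item most similar to what was already
# viewed, so it narrows toward the user's existing interests.

catalog = [
    {"title": "cat video",         "topics": {"cats", "humor"}},
    {"title": "cat conspiracy",    "topics": {"cats", "conspiracy"}},
    {"title": "deeper conspiracy", "topics": {"conspiracy", "politics"}},
    {"title": "gardening how-to",  "topics": {"gardening"}},
]

def recommend(history, catalog):
    """Return the unseen item whose topics overlap most with the history."""
    seen_topics = {topic for item in history for topic in item["topics"]}
    unseen = [item for item in catalog if item not in history]
    return max(unseen, key=lambda item: len(item["topics"] & seen_topics))

history = [catalog[0]]            # the user starts with one innocuous video
for _ in range(2):
    next_item = recommend(history, catalog)
    history.append(next_item)
    print(next_item["title"])     # "cat conspiracy", then "deeper conspiracy"
# The gardening video never surfaces: the feed keeps drifting along the
# topics the user already engaged with, a filter bubble in miniature.
```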

More often than not, echo chambers amplify negative viewpoints, which can make these viewpoints appear to members of the group to be more popular and prevalent throughout society than they are in reality. Many articles (“How echo chambers & context collapse revolutionised societal discourse”; “The Echo Chamber Effect”; Grimes, 2017) and much research (Boutyline & Willer, 2016; Khosravinik, 2017) have established that the rise of echo chambers has led to greater radicalization of society. The illusion of an issue’s significance gives people license to take actions they may not have taken were they not bolstered by the messages they were receiving in their online communities. In this way, social curation that occurs in online spaces can have incredibly real effects in the IRL realm. For example, far-right extremists in the US have been found to have connections to far-right, radical-leaning groups in online communities on 4chan and Reddit. Members of so-called “incel” (involuntary celibate) communities have disseminated murderer Elliot Rodger’s manifesto and confession tapes online (on YouTube, specifically), leading to several attacks inspired by their contents. While it may be inaccurate and inappropriate to fully blame social curation practices in online spaces for these negative consequences, it seems that, in conjunction with learning algorithms, they facilitate the echo chamber formation that can culminate in this kind of fatal hatred.

Perhaps slightly less life-threatening but altogether still troubling is the effect that social curation appears to be having on the self. The evaluative features like “likes” on Facebook and “<3” reacts on Instagram and Twitter that propel most digital social curation processes appear to be having some very serious neurological effects on us, particularly on younger users (Sherman, Payton, Hernandez, Greenfield, & Dapretto, 2016; Dellinger, 2016). Social media sites, specifically their evaluative features, are becoming an addictive substance, like drugs or money, to many younger users, who experience a “rush” when being “liked” or “<3”d on their social media. The same areas of the brain that light up when experiencing pleasure light up when “likes” on social media are earned. Dopamine is released and a high is experienced. This reaction leads many younger users to seek out ever more “likes.” While the study did not claim a correlation between these neurological responses and social curation, it did state that the areas of the brain that light up in association with pleasure are also those associated with “learning and motivating future behavior” (Sherman et al., 2016; Dellinger, 2016). Essentially, seeing a photo that received a ton of “likes” could motivate younger users, whose brains are still developing, to engage in similar behavior.

On sites like Instagram, which operate on a model that revolves wholly around evaluative features, there has been a rise of so-called “Influencers” (pseudo-professionals in certain interest fields; not quite a nobody, not quite a celebrity) who make their livings off of posting content designed to inspire and influence people. Often, that influence is used to get people to buy certain items or invest in a certain recreational or marketable endeavor. Sometimes, this influence is used to get people more meaningfully involved in social, political, environmental, and other issues. Increasingly, though, this influence is being used, whether intentionally or not, to tell people how to live and experience their lives. These perfect snapshots of a kind of hyper-reality are becoming the standard to which people hold themselves personally accountable. If a photo or a tweet or a blog entry or some other kind of social media content does not receive enough engagement, enough “likes,” it is often deemed irrelevant in response. On Instagram, if something does not make the trending page, it essentially does not exist. On YouTube, the top twenty videos in terms of views are what appear on the trending page. Everything else is basically eclectic, niche. And most of this content trends or becomes popular entirely because of evaluative features.

What is most revealing about these evaluative features leading the social curation front is that most of them only provide positive reaction options. Facebook has recently expanded its offering of evaluative features to include several emoji reactions in addition to “likes” and “<3”s, but even the most negative is an “angry” emoji reaction face. There is no dislike option. On Instagram and Twitter, one can only “like” a post. While all offer the option to comment as well, comments are used far less often than “likes.” These features essentially only allow users to react positively to content or not at all. They create the illusion that the only content that matters is the content that garners the most “likes,” and that the only people who matter are the ones who receive the most “likes.” Influencer culture amplifies this message by promoting the idea that one should strive to be the content that is being circulated, or else become the subject that curated content revolves around. Most of us can probably agree that this is a troubling message for anyone, but especially for impressionable youth. It makes us devalue our own self-worth and rely upon extrinsic factors to legitimize our experiences and determine intrinsic meaning. It is no longer enough to like who we are if the rest of the world does not “like” us.

Additionally, these evaluative features give us the illusion that they provide the ability to express ourselves when, in reality, they curate our emotional range for us. Facebook offers a handful of reactions, which may seem expansive, especially in comparison to other social media sites, but is the human emotional experience really composed of only a handful of reactions? How many words can you name for angry right now? For sad? How many of those words do you feel a single “angry” emoji captures? Reducing the breadth of the human emotional experience to several options is not a boon and should not be lauded as such. The bar should be higher. The Internet was meant to be an expansive and immersive space where the free exchange of ideas and culture and society could occur. It was meant to provide us with opportunities to expand our thinking and increase our learning about the world by lowering many traditional, constraining boundaries such as time and space. That was the ideal. But rather than online spaces being these immersive spaces where discovery and disappointment can occur, they are becoming heavily curated spaces that not only limit our emotional ranges but also change how we respond to things in ways that can spill over into “real life” (i.e., echo chambers and Influencer culture). We are being socialized to respond positively to content or not at all.

And so much of this curation is being perpetrated by AI, by machine algorithms. People are not even creating these standards. Bot accounts are racking up “likes” and gaming the system–for a fee, of course (“Social Media: How common is buying likes on social media?”). People are making stupid money off of our insecurities. We are being socialized to believe that it is normal to derive our self-worth and our sincere beliefs from the number of “likes” a picture or a post receives, based upon arbitrary algorithms and bribes by another name. That is problematic.

While it may be tempting to emphasize all of the good features of online spaces and the ways in which online spaces foster and facilitate community, it would be remiss not to address how the same processes that encourage and enable community also enable and embolden the formation of less supportive entities. More, it is important, now perhaps more than ever, to bring awareness to institutions that facilitate and incentivize division within society. Conflict is profitable, both internal and external. Our emotions, reactions, integrities, and self-esteem are all being commodified in this algorithm-ruled space where the object is to score the most eyeballs through whatever means necessary and damn the consequences of that pursuit. We are all casualties in this world. It is not just problematic; it is scary.

But, how do we address these problems and the issues of online design that so many of them stem from?

How about we take a look at humane design with a dash of metamodernism thrown in for good measure~

Proposing Solutions

So, when it comes to social curation in online spaces, the cat is kind of already out of the bag and napping soundly in its place in the sun. The current state of the Internet is designed around generating profit via advertising, not around ensuring best practices for social well-being. Consequently, the evaluative features that propel social curation have been designed to best generate profit. Influencers have designed their “brands” around accumulating “likes” which translate into dollars. More “liked” content is disseminated more, which causes it to #trend and become pervasive in the resounding pop culture sphere, creating that feedback loop we mentioned earlier which is so integral to the perpetuation of socially curated content. To eradicate the more problematic aspects of social curation online would require a complete overhaul of the Internet. That’s not happening anytime soon. At least, it is not happening in the US under the influence of almighty FCC Chairman Ajit Pai’s infinite wisdom. Apparently you can break the Internet all you want, but Ajit Pai forbid you try to fix it…

Anyway, in light of that unfortunate reality, provided below are some suggestions for how individual users can be more mindful of social curation at work in online spaces. As the government seems uninterested in taking responsibility for these issues, that responsibility falls upon the shoulders of individual users. Most of these suggestions are informed by the Center for Humane Design as well as by the tenets of the contemporary metamodern/post-postmodern art movement, which promotes a return to the valuing of dialogue, sincerity, trust, and other ideals that have been swallowed by postmodern irony and apathy (Vermeulen & Van Den Akker, 2010; Turner, 2015; “Metamodernist Manifesto”).

Example of Shia LaBeouf’s take on metamodernism~

So, without further ado:

  • Digital Literacy Education. As we have been saying all semester, in order for any changes to be made in the digital sphere, digital literacy development is key. Greater education is absolutely integral to any changes being made in regards to social curation. Tbh, not many people actually know what social curation is despite it being the driving force behind the Internet as we know it. As with Internet surveillance, if more people were aware of just how much of their Internet experience is being decided by AI and big wig lobbying, they would not agree with it. Awareness and education are the first steps toward meaningful change.
  • Renavigation & Resituation. Currently, people navigate the online world through evaluative features, which are the driving force of social curation. People need to make a concerted effort not to simply “like” content online. Rather, if content interests them or provokes an emotional response, they should say so. Write a comment, make a note, send a message, etc. Actually engage with the content. Make your Internet journeys about leaving impressions rather than about leaving with shallow impressions. Don’t let the Internet make you. You make the Internet.
  • Declutter & Consider Your Content Intake. Much of the content created online in social media spaces is designed to profit off of insecurities as well as off of selling unrealistic ideals about life. Some content does this unintentionally, just as a result of existing in a digital, unregulated space like our Internet. Other content is purposefully designed to snag and maim us, to reduce us to our faults. Cut that content out. If the purpose of an Influencer’s content is to market off of your insecurities, or if a product is using social media in an aggressive manner that detracts from your life experience, you do not need it in your life or on your feed. Follow Marie Kondo’s wise advice and remove from your life that which does not spark joy. You will thank yourself later, and you will probably find that your feed and your disposition are more enjoyable and relaxed.
  • Expand Your Critical Intake. Odds are most of your social media and your online sphere is composed of content and ideas that you find similar to your own. That’s understandable. It also creates those pesky filter bubbles that can become full-blown echo chambers. To be healthy and critical consumers of digital content and to develop digital literacy, it is important to be exposed to a wide variety of viewpoints. Tbh, this is important for a well-rounded life. Though it may be slightly vexing at times, do not unfollow your conservative Aunt Karen. You do not have to engage her (which would probably have the opposite effect anyway); just be aware of the variety of viewpoints around you. Associate those viewpoints with people. Remember that disagreements occur between real people, not ideologies or tweets. Critical discourse is not going to solve anything if we do not have the skills or knowledge to engage critically with each other as individuals. Of course, if following someone with a contrary viewpoint becomes toxic to yourself, refer to Suggestion 3.
  • Do Not Be Afraid. For many of us, our social media and, by extension, our social curation practices are integral to both our digital selves and our IRL selves. These spaces and these practices inform large chunks of who we are. Making changes means making changes to how we know ourselves–which can be very scary. But do we want to live in an online world that does not value us as individuals and is, in fact, designed to devalue us? Is that sense of self we hold onto an accurate representation of who we want to be or should be? Or is it a safety blanket, hiding our own insecurity that we will somehow be lesser if we interact less with online content? If anything, social curation teaches us the power of the individual and of self-actualization. How we choose to perceive the world and what we choose to value about it and ourselves shapes this enterprise. It rises and falls with our input. Imagine how it could change if we suddenly decided that we are more valuable than it. Because we are. This last suggestion, towards self-esteem, is not purely a digital one, but it is perhaps the most important suggestion in our media-saturated world where “likes” are the currency but self-worth is the cost. If we value ourselves at whole price, social curation cannot discount us.

Conclusion

Ultimately, the Internet in 2019 is kind of a clusterf*ck, especially in its operations. Social curation is both pervasive and singularly significant in how it impacts user experience in online spaces. Much of the research on social curation has focused on its neurological effects and on how it facilitates the creation of filter bubbles, which can increase the chances of radicalization. Little research seems interested in the social ramifications overall or in how we as individuals can curtail the negative effects of social curation. The pervasiveness and prevalence of social curation, though, are what make this issue a pressing problem. Really, social curation is the underlying issue for many problems associated with online spaces in 2019. It is the organizing principle of the Internet, after all. Better understanding how it operates and how we contribute to it is key if we want the Internet to be a more progressive and welcoming space. More, if we want to see changes in how we interact and engage with each other IRL, we need to make changes in how we interact and engage with each other and with content online. Social curation is slowly but surely spilling over and bleeding into our real lives, whether we notice or not. China has already established a social credit system based upon interaction with online content. This is not some made-up issue or some passing #trend. Rather, this is a contemporary problem that is affecting us right now, and a problem that will not improve if we do not improve. The Internet is ours. We reclaim it by reclaiming ourselves.

Acknowledgements

So, like, most of this research could not have been accomplished without the help and insight of my snarky digital alchemist friend, Vlada Slaughter. We met many times on Twitter and on the Arganee Cafe’s rooftop to chat social curation business. She engaged me and many other #NetNarr friends in conversation about the effects of social curation and how to curtail it. She is the one who found and suggested I check out the Center for Humane Design. We also engaged in commentary about metamodernism and how to approach creating a better, more supportive online experience. Additionally, she shared her knowledge about social curation across the web in snarky comments here and there~ I couldn’t have asked for a better guide ^.^ don’t tell her I said that or I’ll never hear the end of it