The first thing to appreciate here is that rather than oversell the capabilities, Elon admits that the algorithm is not yet that smart. But as I read this, and considering that X is the only non-professional social media channel I have been browsing lately, I started thinking about a few ways this capability could be built into the algorithm.

The simplest way is to add a dislike button. There is already a feature that says, “I don’t want to see this.” However, I am not sure whether it merely removes that post from your timeline or also records the action as a sentiment data point. A dislike button would capture sentiment about content from a specific source. Leveraging this logic, the algorithm could predict with a reasonable level of accuracy that you are forwarding content you dislike, and ignore forwards from those sources as a data point when evaluating likes.
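A minimal sketch of that logic, assuming a per-source tally of dislikes versus total interactions (all class and account names here are illustrative, not X's actual API):

```python
from collections import defaultdict

class EngagementSignals:
    """Tracks per-source sentiment so that forwards from sources the user
    dislikes are not counted as positive engagement. Illustrative only."""

    def __init__(self):
        self.dislikes = defaultdict(int)  # source -> dislike count
        self.views = defaultdict(int)     # source -> total interactions

    def record_dislike(self, source):
        self.dislikes[source] += 1
        self.views[source] += 1

    def record_view(self, source):
        self.views[source] += 1

    def forward_is_positive(self, source, threshold=0.5):
        # If the user dislikes most content from this source, treat a
        # forward from it as non-positive and discard it as a "like" signal.
        if self.views[source] == 0:
            return True  # no history: keep the default interpretation
        return self.dislikes[source] / self.views[source] < threshold

signals = EngagementSignals()
signals.record_dislike("propaganda_account")
signals.record_view("friendly_account")
print(signals.forward_is_positive("propaganda_account"))  # False
print(signals.forward_is_positive("friendly_account"))    # True
```

A real system would decay these counts over time and weight explicit dislikes more heavily than passive views, but the core idea is the same ratio test.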
I have never forwarded content on X, but if you are allowed to add text when you forward, the sentiment of that text is another data point. For example, if the text says “I love this”, the forward is a positive forward. If the text says “This is pathetic”, you know that the person forwarding did not love the content.
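That caption-sentiment check can be sketched with a toy keyword lexicon (a production system would use a trained sentiment model; the word lists here are assumptions for illustration):

```python
# Toy lexicons; a real classifier would be a trained model, not keywords.
POSITIVE_WORDS = {"love", "great", "awesome", "brilliant"}
NEGATIVE_WORDS = {"pathetic", "hate", "terrible", "awful"}

def caption_sentiment(text):
    """Classify a forward caption as 'positive', 'negative', or 'neutral'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(caption_sentiment("I love this"))       # positive
print(caption_sentiment("This is pathetic"))  # negative
```

The algorithm could then count only forwards with positive (or at least non-negative) captions as "like" signals.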
But there is a way to make this more sophisticated.
AI can already capture what an image or video is about: it can generate labels and detailed descriptions for both. Social media algorithms already build profiles of users based on their engagement; that is what feed suggestions are all about, showing you content you may like or dislike based on your profile, which in turn is based on your activities. What needs to be done is to link image and video sentiment with user sentiment.
As an example, a propaganda video belittling a specific religion in a subtle way can still be labeled as content around the theme “critical of religion XYZ”. If user A generally shares and likes content that applauds religion XYZ, an algorithm can reasonably infer that the user would not want to see the critical content. The user’s comments on such critical content can also be leveraged as data points. Hence, if the user still ends up seeing critical content and forwards it to their friends, the algorithm can discard that data point.
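The theme-versus-profile check above can be sketched as follows, assuming the platform already produces theme labels for media and an affinity score per theme for each user (both are assumptions; the labels and scores here are made up for illustration):

```python
def forward_counts_as_like(content_themes, user_affinity, threshold=-0.2):
    """Decide whether a user's forward should count as positive engagement.

    content_themes: theme labels produced by an image/video model,
        e.g. ["critical of religion XYZ"] (illustrative labels).
    user_affinity: dict mapping theme -> score in [-1, 1], built from the
        user's likes, shares, and comments.
    Returns False when any of the content's themes clash with the user's
    profile, so the algorithm can discard the forward as a "like" signal.
    """
    scores = [user_affinity.get(t, 0.0) for t in content_themes]
    if not scores:
        return True  # no theme information: keep the default behaviour
    return min(scores) > threshold

# User A's profile: strongly pro religion XYZ, strongly anti its critics.
affinity = {"applauds religion XYZ": 0.9, "critical of religion XYZ": -0.8}
print(forward_counts_as_like(["critical of religion XYZ"], affinity))  # False
print(forward_counts_as_like(["applauds religion XYZ"], affinity))     # True
```

Taking the minimum over theme scores is a deliberate choice: one strongly disliked theme is enough to flag the forward, even if other themes are neutral.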
The last and most advanced method is to build a more accurate sentiment-theme model for images and videos. That may not be worth the effort from a social media perspective, since the previous methods may do the job, but this approach can be extremely useful in other applications. More on this in a separate post.

