Posted by z3d in Tech

Elon Musk has done a lot of things to remake Twitter in his own image since purchasing the social media platform last year. He's gotten rid of most of its employees, ditched its longstanding verification program, stopped paying its bills, and alienated many of its advertisers along with its most prolific users. Or, put another way, he's taken one of the most widely-used sources of real-time information and turned it into the chaos platform.

Still, of all the controversial things Musk has done, he may have topped them all on Saturday. In a tweet, Musk announced that Twitter was imposing rate limits to "address extreme levels of data scraping & system manipulation."


Comments


righttoprivacy wrote

An attack on Nitter...

We are seeing the same form of attack: Reddit API limits and Twitter rate limits breaking Nitter, and now Invidious has also received a legal letter from YT.

An attack on private searching (to enhance tracking?)


not_bob wrote

No doubt. They want to monetize everything everyone does.

This is really a problem.

All the more reason we need to continue supporting networks like I2P that let a person do things with privacy.


righttoprivacy wrote (edited )

1000%.

Once the truly dark side of AI commercialization ripens, more people will start to get it.


iop23up wrote (edited )

They want to deny scraping of their data. But it is not their data, it's the posters' data. What about the users' copyright? This whole AI thing would fall down very quickly if you had to create/generate your data yourself by hiring volunteers on a salary. Yet what Twitter is doing now is making these data generators pay money for distributing their own data. Quite weird.

What is needed is a form of copyright that extends its power to segmented/parameterized usage, or that can allow/disallow training usage. It should be easy to verify: if you can reproduce the original to a degree that indicates you had it as training data, that's the tell (a rough sketch of this check follows below). What do you think you will find in some Stable Diffusion models if you prompt with the right parameters? No surprise to find these things in there. The consequence would be that these data scrapers/trainers need to check that they haven't trained on copyrighted material.

This discussion is not new. How many parts make up an original? If it consists of parts so small that they are "trivial" information, is the amorphous mass of those parts copyright-free, and does it become copyrighted again with an instruction for how to put the parts together in the right way? There have been apps that tried to circumvent copyright by fractionizing the data into "trivial" parts; IIRC the judges didn't see it that way. The difference here is that those instructions would build/reconstruct only one output, not variations of the data like the AI models do.
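A minimal sketch of that reproduction check in Python, using only the standard library. The similarity threshold of 0.9 and the sample strings are illustrative assumptions, not a legal standard; a real test would compare a known original against what the model actually emits when prompted.

```python
# Toy "reproduction test": does a model's output match a known original
# closely enough to suggest the original was in the training data?
# The 0.9 threshold is an illustrative assumption, not a legal standard.
from difflib import SequenceMatcher

def likely_trained_on(original: str, model_output: str,
                      threshold: float = 0.9) -> bool:
    """True if the output reproduces the original beyond the threshold."""
    similarity = SequenceMatcher(None, original, model_output).ratio()
    return similarity >= threshold

# Example: prompt the model "with the right parameters" and compare.
original = "It was the best of times, it was the worst of times."
output = "It was the best of times, it was the worst of times."
print(likely_trained_on(original, output))  # True -> a rights check is due
```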

The I2P model does not help too much against this stealing of data, I think. It just makes it slower, but it will force the scrapers to run nodes, which is good for I2P :). The anonymity of I2P also gains some ID mixing, as you can't really build one ID like you can with a registered user.

My hope is that people are tired of jumping from one platform to another, but I don't think they are in a state that lets them recognize the importance of these corporation fights over their data and access, and the huge necessity of having an alternative that isn't big money/big data, one they have to pay for or contribute to the development/maintenance of.

I don't know if these technologies provide the same features as the big corporation solutions, but I don't think that should be the goal either. The niche is that people need something they can rely on if they are abandoned by some service. The question is always how to connect again. That's the niche I'm thinking of. Something like: Twitter has deleted my post, but it is still on i2p-x (or Bitmessage, I2P email fed, etc.), here is the link, should be ok as long as 2+ nodes are running... Or: here is the link to my always-up link list on i2p-x (I know you can do this right now with your own HTTP server/railroad), where you can find my other locations.

What about something that allows for a universal post, using images? If I post, it would generate a JPG. It puts metadata from the post, like the post number, references, keys, and hashes, into the metadata of the image. The image itself would be a screenshot of the post (maybe watermarked). The raw post data would be put into the image as stego, encrypted or not. This could be useful for reposting etc. and is not format specific. The user would generate it by posting/sharing and store/send it anywhere for linking/reference. This method produces a bigger post size for sure, but that could be acceptable if finding it at all is more important. And you can read the post with any image viewer and don't need to touch any forum/message service/app to read it. A rough sketch of this idea follows below.
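A rough Python sketch of that universal-post idea, assuming Pillow is installed. It swaps PNG in for the JPG mentioned above, because JPEG recompression would destroy a least-significant-bit payload; the `embed`/`extract` names and the metadata keys are made up for illustration, not any existing format.

```python
# Sketch: pack a post into a single image, assuming Pillow (pip install pillow).
# Visible pixels = the screenshot; PNG text chunks = the post metadata;
# pixel LSBs = the raw post data (stego). PNG instead of JPG because
# JPEG recompression would destroy the hidden bits.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed(screenshot_path: str, out_path: str, payload: bytes, meta: dict) -> None:
    img = Image.open(screenshot_path).convert("RGB")
    flat = [channel for pixel in img.getdata() for channel in pixel]
    # 4-byte big-endian length prefix so the reader knows where the payload ends.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(flat):
        raise ValueError("payload too large for this image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the least significant bit
    out = Image.new("RGB", img.size)
    out.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    info = PngInfo()
    for key, value in meta.items():  # post number, references, keys, hashes...
        info.add_text(key, value)
    out.save(out_path, pnginfo=info)

def extract(image_path: str) -> tuple[bytes, dict]:
    img = Image.open(image_path)
    meta = dict(getattr(img, "text", {}))  # PNG text chunks
    flat = [channel for pixel in img.convert("RGB").getdata() for channel in pixel]
    def read_bytes(start_bit: int, count: int) -> bytes:
        out = bytearray()
        for b in range(count):
            byte = 0
            for j in range(8):
                byte = (byte << 1) | (flat[start_bit + b * 8 + j] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length), meta

# Usage: any image viewer shows the screenshot; this reads the hidden post.
embed("screenshot.png", "post.png", "raw post text".encode(),
      {"post-number": "42", "reference": "i2p-x"})
payload, meta = extract("post.png")
print(payload.decode(), meta)
```

An encrypted variant works the same way: encrypt the bytes before `embed()` and decrypt after `extract()`; the image format doesn't care what the payload is.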
