Recent comments in /f/Privacy
AWiggerInTime wrote
Reply to comment by Imperator in Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Matrix itself is decent, but the official software is utter shit.
Element is a bloated Electron mess that's somehow bigger than pisscord, and it's buggy as all hell (from small UI bugs to losing connection/not receiving messages). Don't get me started on the mobile version. Oh, and fun fact: even though olm is implemented in C so it can run natively on pretty much anything, desktop Element still goes through wasm for EVERY MESSAGE, because the devs never managed to link a native binary into the release exec.
The server is even worse: even installing this piece of shit can be a challenge (especially outside the Linux comfort zone) and it hogs EVERYTHING. Say goodbye to like 3 GB of RAM for a few rooms and users. Say goodbye to your disk space & CPU, because Python.
The only thing they haven't fucked up yet is Dendrite, the second-gen server, which actually looks promising. But it's still in beta, so it's probably too early to call.
Rambler OP wrote
Reply to comment by Kalchaya in Why We Absolutely Must Ban Private Use of Facial Recognition by Rambler
Not according to me getting into my workplace. Glasses, hat, mask. Sometimes I have to pull my mask down just a tad to reveal the very top of my nose, the bridge, near the eyes.
It's scary.
Kalchaya wrote
So long as masks are de rigueur, facial recognition is pretty much a nonissue. Add some sunglasses with a hat, and the tech is dead in the water.
Kalchaya wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Any time you have to download an app to use something, you should be concerned. Apps and anonymity tend not to coexist.
smartypants wrote
Reply to comment by Wahaha in Stalker 'found Japanese singer through reflection in her eyes' by onion
It's been a common trope in sci-fi movies and stories since 1982.
"ZOOM... ENHANCE!"
Such as a scene in Blade Runner.
Wahaha wrote
Reply to comment by Rambler in Why We Absolutely Must Ban Private Use of Facial Recognition by Rambler
What's the threat scenario of some random company acquiring your face? I think of privacy as a safety feature, so if I can't think of a threat, I have a harder time caring.
That, and my passion is archiving, so deleting data is innately somewhat uncomfortable for me.
Toxicant wrote
Reply to comment by Imperator in Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Element.io is fantastic
Rambler OP wrote
Reply to comment by Wahaha in Why We Absolutely Must Ban Private Use of Facial Recognition by Rambler
My concern is more private use. I get my face scanned to enter my workplace, and the (biometrics) company states that they retain that data for up to 3 years beyond end of employment.
To me, that's up to 3 years too long.
And I wouldn't "mind" it, so long as that information were stored locally and could be purged by HR when an employee is no longer employed, as part of an after-employment checklist. For example, if you have a company with 700 active employees, then on your LAN you have the biometric hardware/software operating, it contains no more than 700 faces, and it doesn't face anything public, as it's only used to allow/deny entry to the building. It doesn't need a web-facing control panel, there's no need to store that data 'in the cloud', etc.
But, that's not how things are done. The biometric company could be bought up by another. It could be hacked. It could be secretly funded by any alphabet agency or sharing data with them.
If it was private use, open source, localized installs across companies and company owned worksites... no problem.
As far as public stuff goes? I'm kind of with you. I have cameras. I use them. More so when I lived in the city. Shortly after installation I thought all the hoodlums were casing cars on the street, because they were walking in the street instead of on my sidewalk. Turns out they'd noticed the cameras and thought they'd be out of view if they just walked in the middle of the road. Nope, I still see ya, buddy.
Wahaha wrote
As much of a privacy nightmare as it is, I kinda dream of a city with high-resolution security cams featuring facial recognition covering every public space, even the sewers. But they would be accessible to everyone, so you can watch it yourself. It could be cooler than reality TV.
Also, I never was too concerned with privacy in public. The problem is how the system can be abused in the future, but then everyone is more or less keeping a tracking device on their body and publishing their opinions on the Internet, so I'm not sure if facial recognition could be abused to do something that isn't already possible anyway.
Maybe people would finally stop littering, if there are cams identifying and fining them automagically.
Wahaha wrote
Have you seen the show Higashi no Eden? Friends of the protagonist created an app that would let them identify everything, people included. Everyone had the ability to identify new things and add to the database. It was a pretty neat tool, but utterly futuristic back in 2009 when the show aired. That was about when smartphones became common.
And it looked a lot like that screenshot from the site.
The concept was kinda dwarfed by the real point of the show, which was a mobile phone loaded with a billion or so yen and an operator doing tasks for you using that money. Like shooting rockets, or shipping all shut-ins off to Africa, or something like that. Good fun.
BlackWinnerYoshi wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Well, while open source does not mean it's secure, this is still a weird thing to do.
I would simply recommend dropping Signal and switching to XMPP with OMEMO encryption, since that's the gold standard of instant messengers, at least for me. You should especially stop using Signal because it requires your phone number, which immediately disqualifies it as a private messenger.
Imperator wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Have you tried element.io and Matrix? Been using it for years now and I'm very happy with it. Clients for all kinds of platforms and bridges to all kinds of networks exist.
KeeJef wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Yes lol, the client is making calls to endpoints on the server which don't even exist in the publicly released code. Saying all messages are encrypted avoids the question of metadata and how the server actually deals with that metadata.
onion OP wrote
Reply to comment by Wahaha in Stalker 'found Japanese singer through reflection in her eyes' by onion
It's the same story, but figured it was worth posting even though it's old.
Yeah, I remember seeing a picture taken by one of those extremely high-def security cameras a few years ago. It was amazing how far you could zoom in. Maybe this was it? I don't know; I can't see it since I'm using Tor.
Wahaha wrote
Again? Or is that the story from a couple years back?
Anyway, all these TV shows were ahead of their time, with their infinite zoom that is now at least somewhat feasible.
Just think about how good security cameras can be these days, zooming in on what a driver across the street is reading. Same with satellites.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
Innovating the System
To overcome the exponential growth in data and the subsequent stovepiping, the IC doesn’t need to hire armies of 20-somethings to do around-the-clock analysis in warehouses all over northern Virginia. It needs to modernize its security approach to connect these datasets, and apply a vast suite of machine learning models and other analytics, to help targeters start innovating. Now. Technological innovations are also likely to lead to more engaged, productive, and energized targeters who spend more of their time applying their creativity and problem-solving skills and less time doing robot work. We can’t afford to lose any more trained and experienced targeters to this rapidly fatiguing system.
The current system, as discussed, is one of unvalidated data collection and mass storage, manual loading, mostly manual review, and robotic swivel-chair processes for analysis.
The system of the future breaks down data stovepipes and eliminates the manual and swivel chair robot processes of the past. The system of the future automates data triage, so users can readily identify datasets of interest for deep manual research. It automates data processing, cleaning, correlations and target profiling – clustering information around a potential identity. It helps targeters identify patterns and suggests areas for future research.
How do current and emerging analytic and ML techniques bring us to the system of the future and better enable our targeter? Here are four ideas to start with:
- Automated Data Triage: As data is fed into the system, a variety of analytics and ML pipelines are applied. A typical exploratory data analysis (EDA) report is produced (data size, file types, temporal analysis, etc.). Additionally, analytics ingest, clean, and standardize the data. ML and other approaches identify languages, set aside likely irrelevant information, summarize topics and themes, and identify named entities, phone numbers, email addresses, etc. This first step aids in validating data need, enables an improved search capability, and sets a new foundation for additional analytics and ML approaches. There are seemingly countless examples across the U.S. national security space.
- Automated Correlation: Output from numerous data streams is brought into an abstraction layer and prepped for next-generation analytics. Automated correlation is applied across a variety of variables: potential name matches, facial recognition and biometric clustering, phone number and email matches, temporal associations, and locations.
- Target Profiling (Network, Spatial, and Temporal Analytics): As the information is clustered, our targeter now sees associations pulled together by the computer. The robot, leveraging its computational speed along with machine learning for rapid comparison and correlation, has replaced the swivel-chair process. Our targeter is now investigating associations, validating the profile, and refining the target’s pattern-of-life. She is coming to conclusions about the target faster and more effectively and is bringing more value to the mission. She’s also providing feedback to the system, helping to refine its results.
- AI-Driven Trend and Pattern Analysis: Unsupervised ML approaches can help identify new patterns and trends that may not fit into the current framing of the problem. These insights can challenge groupthink, identify new threats early, and find insights that our targeters may not even know to look for.
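As a toy illustration of the "automated data triage" idea described above – not any actual IC pipeline – here is a minimal Python sketch that produces a tiny EDA-style summary and extracts candidate entities (emails, phone-like strings) from raw text. All names, patterns, and sample data are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical triage pass: given raw text blobs from an ingest feed,
# produce a small EDA summary and pull out candidate entities
# (emails, phone-like numbers) for downstream correlation.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def triage(documents):
    report = {
        "doc_count": len(documents),
        "total_chars": sum(len(d) for d in documents),
        "emails": Counter(),
        "phones": Counter(),
    }
    for doc in documents:
        report["emails"].update(EMAIL_RE.findall(doc))
        report["phones"].update(PHONE_RE.findall(doc))
    return report

docs = [
    "Meet at the safehouse. Contact: akram@example.org, +1 202 555 0147",
    "Wire details sent to akram@example.org yesterday.",
]
summary = triage(docs)
print(summary["doc_count"])              # 2
print(summary["emails"].most_common(1))  # [('akram@example.org', 2)]
```

A real system would add language identification, topic modeling, and trained named-entity recognizers on top of this kind of skeleton; the point is only that the first pass is mechanical and automatable.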
Learning User Behavior: Our new system shouldn’t just enable our targeter; it should learn from her. ML applied behind the scenes to monitor our targeter can help drive incremental improvements to the system. What does she click on? Did she validate or refute a machine correlation? Why didn’t she explore a dataset that may have had value to her investigation and analysis? The system should learn and adapt to her behavior to better support her. Her tools should highlight where data that could have value to her work may be. It should also help train new hires. Let’s be clear: we’re far from the Laplace’s demon of HBO’s “Westworld” or FX’s “Devs” – there is no super machine that will replace the talented and dedicated folks who make up the targeting cadre. Targeters will remain critical to evaluating and validating these results, doing deep research, and applying their human creativity and problem solving. The national security space hires brilliant and highly educated personnel to tackle these problems; let’s challenge and inspire them, not relegate them to the swivel-chair processes of the past.
We need a new system to handle the data avalanche and support the next generation. Advanced computing, analytics, and applied machine learning will be critical to efficient data collection, successful data exploitation, and automated triage, correlation, and pattern identification. It’s time for a new chapter in how we ingest, process, and evaluate intelligence information. Let’s move forward.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Collapsing Emergent System
Much of our targeter’s workday is spent on information extraction and organization, the vast majority of which is, well, robot work. She’ll be repeating manual tasks for most of the day. She knows what she needs to investigate today to continue building her target or network profile. Today it’s a name and a phone number. She has a time consuming, tedious, and potentially error-prone effort ahead of her–a “swivel chair process”–tracking down the name and phone number in multiple databases using a variety of outmoded software tools. She’ll manually investigate her name and phone number in multiple stovepiped databases. She’ll map what she’s found in a network analysis tool, in an electronic document, or <wince> a pen to paper notebook. Now…finally…she will begin to use her brain. She’ll look for patterns, she’ll analyze the data temporally, she’ll find new associations and correlations, and she’ll challenge her assumptions and come to new conclusions. Too bad she spent 80% of her time doing robot work.
This is the problem as it stands today. The targeter is overwhelmed with too much unstructured and stovepiped information and does not have access to the tools required to clean, sift, sort and process massive amounts of data. And remember, the system she operates is about to receive exponentially more data. Absent change, a handful of things are almost certain to happen:
- More raw data will be collected than is actually relevant, increasing the stress on infrastructure to store all of that data for future analysis.
- Infrastructure (technical and process-related) will continue to fail to make raw data available to technologists and targeters for processing at a mission-relevant pace.
- Targeters and analysts will continue to perform manual tasks that take the majority of their time, leaving little time for actual analysis and delivery of insights.
- The timeline from data to information, to insights, to decision making will be extended exponentially as data exponentially increases.
- Insights resulting from correlations between millions of raw data points will be missed entirely, leading to incorrect targets being identified, missed targets or patterns, or targets of inaccurate importance being prioritized first.

This may seem banal or weedy, but it should be very concerning. This system – how the United States processes the information it collects to identify and prevent threats – will not work in the very near future. The data stovepipes of the 2020s can result in a surprise or catastrophe like the institutional stovepipes of the 1990s; it won’t be a black swan. As the U.S. competes with Beijing, its national defense will require more speed, not less, against more data than ever before. It will require evaluating data and making connections and correlations faster than a human can. It will require the effective processing of this mass of data to identify precision solutions that reduce the scope of intervention needed to achieve our goals while minimizing harm. Our current and future national defense needs our targeter to be motivated, enabled, and effective.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Threat of the Status Quo
Two practical issues loom over the future of targeting and effective, focused U.S. national security actions: data overload and targeter enablement.
The New Stovepipes
Since the 9/11 Commission Report, intelligence “stovepipes” have been part of the American lexicon, reflecting bureaucratic turf wars and politics. Information wasn’t shared between agencies that could have increased the probability that the attack would have been detected and prevented. Today, volumes of information are shared between agencies; exponentially more is collected and shared per month than in the months before 9/11. Ten years ago, a targeter pursuing a high-value target (HVT) – say, the leader of a terrorist group – couldn’t find, let alone analyze, all of the information of potential value to the manhunt. Too much poorly organized data means the targeter cannot possibly conduct a thorough analysis at the speed the mission demands. Details are missed, opportunities lost, patterns misidentified, mistakes made. The disorganization and walling off of data for security purposes means new stovepipes have appeared, not between agencies but between datasets, often within the same agency. As the data volume grows, these challenges have grown too.
Authors have been writing about the issue of data overload in the national security space for years now. Unfortunately, progress to manage the issue or offer workable solutions has been modest, at best. Data of a variety of types and formats, structured and unstructured, flows into USG repositories every hour; 24/7/365. Every year it grows exponentially. In the very near future, there should be little doubt, the USG will collect against foreign 5G, IoT, advanced satellite internet, and adversary databases in the terabyte, petabyte, exabyte, or larger realm. The ingestion, processing, parsing, and sensemaking challenges of these data loads will be like nothing anyone has ever faced before.
Let’s illustrate the issue with a notional comparison.
In 2008, the U.S. military raided an al-Qa’ida safehouse in Iraq and recovered a laptop with a 1 GB hard drive. The data on the hard drive was passed to a targeter for analysis. It contained a variety of documents, photos, and video. It took several hours and the help of a linguist, but the targeter was able to identify several leads and items of interest that would advance the fight against al-Qa’ida.
In 2017, the Afghan government raided an al-Qa’ida media house and recovered over 40 TB of data. The data on the hard drives was passed to a targeter for analysis. It contained a variety of documents, photos, and video. Let’s be nice to our targeter and say only a quarter of the 40 TB is video – that’s still as much as 5,000 hours. That’s 208 days of around-the-clock video review, and she still hasn’t reviewed the documents, audio, or photos. Obviously this workload is impossible given the pace of her mission, so she’s not going to do that. She and her team will look only for a handful of specific documents and largely discard the rest.
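The arithmetic in the 40 TB example can be sanity-checked back-of-the-envelope. The ~2 GB-per-hour video figure below is my assumption, implied by the article’s numbers rather than stated in it:

```python
# Back-of-the-envelope check of the 40 TB media-house example.
# Assumption (illustrative): roughly 2 GB per hour of recovered video.
total_tb = 40
video_tb = total_tb / 4                           # "only a quarter ... is video"
gb_per_hour = 2
hours_of_video = video_tb * 1000 / gb_per_hour    # 10 TB -> 5000 hours
days_around_the_clock = hours_of_video / 24       # ~208 days of 24/7 review
print(hours_of_video, round(days_around_the_clock))  # 5000.0 208
```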
Now suppose that in 2025 the National Security Agency collects 1.4 petabytes of leaked Chinese government emails and attachments. Our targeter and all of her teammates could easily spend the rest of their careers reviewing the data using current methods and tools.
In real life, the raid on Usama Bin Ladin’s compound produced over 250GB of material. It took an interagency task force in 2011 many months to manually comb through the data and identify material of interest. These examples shed light on only a subset of data overload. Keep in mind, this DOCEX is only one source our targeter has to review to get a full picture of her target and network. She’s also looking through all of the potentially relevant collected HUMINT, SIGINT, IMINT, OSINT, etc. that could be related to her target. That’s many more datasets, often stovepipes within stovepipes, with the same outmoded tools and methods.
This leads us to our second problem, human enablement.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Emergent System After 9/11: Data Isn’t a Problem, Using It Is
In the wake of the 9/11 attacks, the U.S. intelligence community and the Department of Defense poured billions into intelligence collection. Data was collected from around the world in a variety of forms to prevent new terrorist attacks against the U.S. homeland. Every conceivably relevant detail that could prevent an attack or hunt down those responsible for attack plotting was collected. Simply put, the United States does not suffer from a lack of data. The emerging capability gap between Beijing and Washington is in the processing of this data to identify the details and patterns that are relevant to America’s national security needs.
Historically, the traditional intersection of data collection, analysis, and national defense was a cadre of people in the intelligence community and the Department of Defense known as analysts. A bottom-up evolution, started after 9/11, has revolutionized how analysis is done and to what end. As data supplies grew and new demands for analysis emerged, the cadre began to cleave. The traditional cadre remained focused on strategic needs: warning policymakers and informing them of the plans and intentions of America’s adversaries. The new demands were more detailed and tactical, and the focus was on enabling operations, not informing the President. Who, specifically, should the U.S. focus its collection against? Which member of a terrorist group should the U.S. military target, where does he live, and what time does he drive to meet his buddies? A new, distinct cadre of professionals rose to meet this demand – they became known as targeters.
The targeter is a detective who pieces together the life of a subject or network in excruciating detail: their schedule, their family, their social contacts, their interests, their possessions, their behavior, and so on. The targeter does all of this to understand the subject so well that they can assess their subject’s importance in their organization and predict their behavior and motivation. They also make reasoned and supported arguments as to where to place additional intelligence collection resources against their target to better understand them and their network, or what actions the USG or our allies should take against the target to diminish their ability to do harm.
The day-to-day responsibilities of a targeter include combing through intelligence collection, be it reporting from a spy in the ranks of al-Qa’ida, a drug cartel, or a foreign government (HUMINT); collection of enemy communications (SIGINT); images of a suspicious location or object (IMINT); review of social media, publications, news reports, etc.(OSINT); or materials captured by U.S. military or partner country forces during raids against a specific target, location, or network member (DOCEX). Using all of the information available, the targeter looks for specific details that will help assess their subject or networks and predict behaviors.
As more and more of the cadre cleaved into this targeter role, agencies began to formalize their roles and responsibilities. Data piled up and more targeters were needed. As this emergent system was being formalized into the bureaucracy, it quickly became overwhelmed by the volumes of data. Too few tools existed to exploit the datasets. Antiquated security orthodoxy surrounding how data is stored and accessed disrupted the targeter’s ability to find links. The bottom-up innovation stalled. Even within the most sophisticated and well-supported environments for targeting in the U.S. Government, the problem has persisted and is growing worse. Without attention and resolution, these issues may make the system obsolete.
onion OP wrote
Reply to A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
Eric Washabaugh served at the CIA from 2006 to 2019 as a targeting and technology manager, leading multiple inter-agency and multi-disciplinary targeting teams focused on al-Qa’ida, ISIS, and al-Shabaab at CIA’s Counterterrorism Center (CTC). He is currently the Vice President of Mission Success at Anno.Ai, where he oversees multiple machine learning-focused development efforts across the government space.
PERSPECTIVE — As the U.S. competes with Beijing and addresses a host of national security needs, U.S. defense will require more speed, not less, against more data than ever before. The current system cannot support the future. Without robots, we’re going to fail.
News articles in recent years detailing the rise of China’s technology sector have highlighted the country’s increased focus on advanced computing, artificial intelligence, and communication technologies. The country’s five-year plans have increasingly focused on meeting and exceeding western standards while constructing reliable internal supply chains and research and development for artificial intelligence (AI). Key drivers of this advancement are Beijing’s defense and intelligence goals.
Beijing’s deployment of surveillance across its cities and its online and financial spaces has been well documented. There should be little doubt that many of these implementations are being mined for direct or analogous uses in the intelligence and defense spaces. Beijing has been vacuuming up domestic data, mining the commercial deployment of its technology abroad, and collecting vast amounts of information on Americans, especially those in the national security space.
The goal behind this collection? The development, training, and retraining of machine learning models to enhance Beijing’s intelligence collection efforts, disrupt U.S. collection, and identify weak points in U.S. defenses. Recent reports clearly reflect the scale and focus of this effort – the physical relocation of national security personnel and resources to Chinese datacenters to mine massive collections to disrupt U.S. intelligence collection. Far and away, the Chinese exceed all other U.S. adversaries in this effort.
As the new administration begins to shape its policies and goals, we’re seeing the typical media focus on political appointees, priority lists, and overall philosophical approaches, but what we need is an intense focus on the intersection of data collection and artificial intelligence if the U.S. is to remain competitive and counter this rising threat.
Elbmar wrote (edited )
Reply to The FBI Should Stop Attacking Encryption and Tell Congress About All the Encrypted Phones It’s Already Hacking Into by Rambler
I was surprised by this
“Law enforcement… use these tools to investigate cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses,” Upturn reports.
This other article it linked to about consent searches was interesting too.
Imagine this scenario: You’re driving home. Police pull you over, allegedly for a traffic violation. After you provide your license and registration, the officer catches you off guard by asking: “Since you’ve got nothing to hide, you don’t mind unlocking your phone for me, do you?” Of course, you don’t want the officer to copy or rummage through all the private information on your phone. But they’ve got a badge and a gun, and you just want to go home. If you’re like most people, you grudgingly comply.
Police use this ploy, thousands of times every year, to evade the Fourth Amendment’s requirement that police obtain a warrant, based on a judge’s independent finding of probable cause of crime, before searching someone’s phone.
https://www.eff.org/deeplinks/2021/01/so-called-consent-searches-harm-our-digital-rights
Elbmar OP wrote (edited )
Reply to Flashback: After 8chan users migrated to Zeronet, their IP addresses were exposed by The Daily Beast by Elbmar
This is old news but it was news to me. I remember the 8chan users migrating to Zeronet in 2019 but I was not aware that so many had their IP addresses exposed.
Peer-to-peer networks expose a user’s internet address to anyone who cares to look. That’s how copyright lawyers catch people trading movies, music and software, and it’s how police and FBI agents arrest pedophiles trading child porn online.
ZeroNet works the same way, a fact that’s been much-discussed on the new site. For that reason, ZeroNet integrates tightly with Tor, an anonymity system that places layers of cut-out addresses between a user and the websites they visit. But only 41 percent of 08chan’s users are using Tor, based on our analysis of the peer-to-peer traffic at the site.
Users on 08chan have been complaining that the site is buggy and slow over Tor, and the site’s own administrator initially encouraged anons to just connect directly.
...
The Daily Beast captured 819 IP addresses for 08chan users connecting from 62 different countries.
Also, the users were concerned about child porn
“Say someones a f****t and uploads cp [child porn],” one Zeronet user wondered on Wednesday, as 8chan users flooded Zeronet’s discussion board. “If i happen to download it, it gets shared from my computer, right? And if i dont notice it bunch of people can download it from me? So im a distrubutor at that point, arent i?”
I think the main takeaway from this is that an ideal network for users of 8chan, Parler, etc. to migrate to would be:
- Anonymous by default, and fast regardless.
- Text only, so that no one worries about accidentally downloading and distributing child porn.
I'm aware that even a regular web browser downloads child porn when a user views a page containing it. But anyone who comes across child porn is required by law to report it. That puts people in a weird legal position if they come across it by accident while using a network like ZeroNet. Legally and morally, the correct thing to do is to remove it from the zites you are distributing and then report it, but reporting might bring more attention to yourself even if you use the anonymous report feature, especially since you were briefly a distributor.
I think using a peer to peer network is one thing they got right, because there's no individual that can be threatened, coerced, or persuaded into shutting 08chan down.
Elbmar wrote
Reply to comment by Wahaha in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
People can delete their messages but I haven't seen it happen enough that it really bothers me.
Yeah it's preferable for news stories to remain up forever. Maybe IPFS could eventually become popular enough that news organizations use it as well. But in the meantime archivists can use it to archive news stories permanently. I agree that it's important for news articles, scientific articles, statements from politicians etc. to not be memoryholed. But ideally, right wing groups should use private anonymous networks with auto-disappearing messages because it's safer. Members being targeted by law enforcement has a much worse effect on a group than any negatives that might come from people deleting their own messages.
Wahaha wrote
Reply to comment by Elbmar in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
If you're participating in a discussion and then memory hole your contributions, nobody can read up on the discussion, since part of it is missing. You could also write up a news story and then memory hole it yourself, if you feel like it.
The ability to remove something you published can be used maliciously. Thus, one of the points of decentralization is to prevent anyone from even having that ability.
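One common way decentralized networks enforce that property is content addressing: data is keyed by its cryptographic hash, so no single party can swap out or silently edit what a key points to. A toy Python sketch of the idea (not ZeroNet's or IPFS's actual scheme):

```python
import hashlib

# Toy content-addressed store: each item's key is the SHA-256 of its bytes,
# so content can't be replaced under an existing key without detection.
store = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def get(key: str) -> bytes:
    data = store[key]
    # Any tampering (or a malicious "edit") breaks verification.
    assert hashlib.sha256(data).hexdigest() == key
    return data

key = put(b"original post")
assert get(key) == b"original post"

# "Editing" just produces a different key; the old key still refers,
# everywhere it was shared, to the original bytes.
new_key = put(b"edited post")
assert new_key != key
```

Deletion in such systems reduces to peers choosing to stop hosting a key, which is why removal only succeeds if every holder cooperates.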
Imperator wrote
Reply to comment by AWiggerInTime in Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Installing Synapse with Docker and a TLS reverse proxy is a relative breeze. Like almost all server software, it requires some setup and general Linux knowledge. I haven't personally noticed a lot of performance issues, but I concur that choosing Python (they even started on Python 2) was a bad design choice. Good for prototyping, but definitely not suitable for large-scale production use. Hopefully Dendrite will reach feature parity soon. Moreover, they're doing some serious work on the p2p end, and a working client already exists (https://p2p.riot.im).
I don't think Element has a bad UI, but there's definitely some room for improvement. I'm not a fan of their use of HTML/CSS/JavaScript; I would have preferred a Rust GTK/Qt client, but I understand that at this stage of the project it's important to support the widest variety of platforms to serve the largest possible userbase. Performance and optimisation can always come later.