Recent comments in /f/Privacy
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Threat of the Status Quo
Two practical issues loom over the future of targeting and effective, focused U.S. national security actions: data overload and targeter enablement.
The New Stovepipes
Since the 9/11 Commission Report, intelligence “stovepipes” have been part of the American lexicon, shorthand for the bureaucratic turf wars and politics that kept agencies from sharing information which could have increased the probability of detecting and preventing the attack. Today, volumes of information are shared between agencies; exponentially more is collected and shared each month than in the months before 9/11. Even ten years ago, a targeter pursuing a high-value target (HVT), say the leader of a terrorist group, could not find, let alone analyze, all of the information of potential value to the manhunt. Too much poorly organized data means the targeter cannot possibly conduct a thorough analysis at the speed the mission demands. Details are missed, opportunities lost, patterns misidentified, mistakes made. The disorganization and walling off of data for security purposes means new stovepipes have appeared, not between agencies, but between datasets, often within the same agency. As the data volume grows, these challenges have grown with it.
Authors have been writing about the issue of data overload in the national security space for years now. Unfortunately, progress to manage the issue or offer workable solutions has been modest, at best. Data of a variety of types and formats, structured and unstructured, flows into USG repositories every hour, 24/7/365, and the volume grows exponentially every year. In the very near future, there should be little doubt, the USG will collect against foreign 5G, IoT, advanced satellite internet, and adversary databases in the terabyte, petabyte, exabyte, or larger realm. The ingestion, processing, parsing, and sensemaking challenges of these data loads will be like nothing anyone has ever faced before.
Let’s illustrate the issue with a notional comparison.
In 2008, the U.S. military raided an al-Qa’ida safehouse in Iraq and recovered a laptop with a 1GB hard drive. The data on the hard drive was passed to a targeter for analysis. It contained a variety of documents, photos, and video. It took several hours and the help of a linguist, but the targeter was able to identify several leads and items of interest that would advance the fight against al-Qa’ida.
In 2017, the Afghan Government raided an al-Qa’ida media house and recovered over 40TB of data. The data on the hard drives was passed to a targeter for analysis. It contained a variety of documents, photos, and video. Let’s be nice to our targeter and say only a quarter of the 40TB is video; that’s still as much as 5,000 hours, or 208 days of around-the-clock video review, and she still hasn’t touched the documents, audio, or photos. Obviously, this workload is impossible given the pace of her mission, so she’s not going to do that. She and her team will only look for a handful of specific documents and largely discard the rest.
Let’s say that in 2025 the National Security Agency collects 1.4 petabytes of leaked Chinese Government emails and attachments. Our targeter and all of her teammates could easily spend the rest of their careers reviewing the data using current methods and tools.
In real life, the raid on Usama Bin Ladin’s compound produced over 250GB of material. It took an interagency task force in 2011 many months to manually comb through the data and identify material of interest. These examples shed light on only a subset of data overload. Keep in mind, this DOCEX is only one source our targeter has to review to get a full picture of her target and network. She’s also looking through all of the potentially relevant collected HUMINT, SIGINT, IMINT, OSINT, etc. that could be related to her target. That’s many more datasets, often stovepipes within stovepipes, with the same outmoded tools and methods.
This leads us to our second problem, human enablement.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Emergent System After 9/11: Data Isn’t a Problem, Using It Is
In the wake of the 9/11 attacks, the U.S. intelligence community and the Department of Defense poured billions into intelligence collection. Data was collected from around the world, in a variety of forms, to prevent new terrorist attacks against the U.S. homeland; every conceivable detail that could prevent an attack or help hunt down those responsible for attack plotting was swept up. Simply put, the United States does not suffer from a lack of data. The emerging capability gap between Beijing and Washington is in processing that data: identifying the details and patterns that are relevant to America’s national security needs.
Historically, the intersection of data collection, analysis, and national defense was a cadre of people in the intelligence community and the Department of Defense known as analysts. A bottom-up evolution that started after 9/11 has revolutionized how analysis is done and to what end. As data supplies grew and new demands for analysis emerged, the cadre began to cleave. The traditional cadre remained focused on strategic needs: warning policymakers and informing them of the plans and intentions of America’s adversaries. The new demands were more detailed and tactical, and the focus was on enabling operations, not informing the President. Who, specifically, should the U.S. focus its collection against? Which member of a terrorist group should the U.S. military target, where does he live, and what time does he drive to meet his buddies? A new, distinct cadre of professionals rose to meet this demand; they became known as targeters.
The targeter is a detective who pieces together the life of a subject or network in excruciating detail: their schedule, their family, their social contacts, their interests, their possessions, their behavior, and so on. The targeter does all of this to understand the subject so well that they can assess their subject’s importance in their organization and predict their behavior and motivation. They also make reasoned and supported arguments as to where to place additional intelligence collection resources against their target to better understand them and their network, or what actions the USG or our allies should take against the target to diminish their ability to do harm.
The day-to-day responsibilities of a targeter include combing through intelligence collection, be it reporting from a spy in the ranks of al-Qa’ida, a drug cartel, or a foreign government (HUMINT); collection of enemy communications (SIGINT); images of a suspicious location or object (IMINT); review of social media, publications, news reports, etc.(OSINT); or materials captured by U.S. military or partner country forces during raids against a specific target, location, or network member (DOCEX). Using all of the information available, the targeter looks for specific details that will help assess their subject or networks and predict behaviors.
As more and more of the cadre cleaved into this targeter role, agencies began to formalize their roles and responsibilities. Data piled up and more targeters were needed. As this emergent system was being formalized into the bureaucracy, it quickly became overwhelmed by the volumes of data. Too few tools existed to exploit the datasets. Antiquated security orthodoxy surrounding how data is stored and accessed disrupted the targeter’s ability to find links. The bottom-up innovation stalled. Even within the most sophisticated and well-supported targeting environments in the U.S. Government, the problem has persisted and is growing worse. Without attention and resolution, these issues may make the system obsolete.
onion OP wrote
Reply to A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
Eric Washabaugh served as a targeting and technology manager at the CIA from 2006 to 2019, leading multiple inter-agency and multi-disciplinary targeting teams focused on al-Qa’ida, ISIS, and al-Shabaab at CIA’s Counterterrorism Center (CTC). He is currently the Vice President of Mission Success at Anno.Ai, where he oversees multiple machine learning-focused development efforts across the government space.
PERSPECTIVE — As the U.S. competes with Beijing and addresses a host of national security needs, U.S. defense will require more speed, not less, against more data than ever before. The current system cannot support the future. Without robots, we’re going to fail.
News articles in recent years detailing the rise of China’s technology sector have highlighted the country’s increased focus on advanced computing, artificial intelligence, and communication technologies. The country’s five-year plans have increasingly focused on meeting and exceeding Western standards, while constructing reliable internal supply chains and research and development for artificial intelligence (AI). Key drivers of this advancement are Beijing’s defense and intelligence goals.
Beijing’s deployment of surveillance across its cities, online spaces, and financial systems has been well documented. There should be little doubt that many of these implementations are being mined for direct or analogous uses in the intelligence and defense spaces. Beijing has been vacuuming up domestic data, mining the commercial deployment of its technology abroad, and collecting vast amounts of information on Americans, especially those in the national security space.
The goal behind this collection? The development, training, and retraining of machine learning models to enhance Beijing’s intelligence collection efforts, disrupt U.S. collection, and identify weak points in U.S. defenses. Recent reports clearly reflect the scale and focus of this effort, including the physical relocation of national security personnel and resources to Chinese datacenters to mine massive collections in order to disrupt U.S. intelligence collection. Far and away, the Chinese exceed all other U.S. adversaries in this effort.
As the new administration begins to shape its policies and goals, we’re seeing the typical media focus on political appointees, priority lists, and overall philosophical approaches, but what we need is an intense focus on the intersection of data collection and artificial intelligence if the U.S. is to remain competitive and counter this rising threat.
Elbmar wrote (edited )
Reply to The FBI Should Stop Attacking Encryption and Tell Congress About All the Encrypted Phones It’s Already Hacking Into by Rambler
I was surprised by this
“Law enforcement… use these tools to investigate cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses,” Upturn reports.
This other article it linked to about consent searches was interesting too.
Imagine this scenario: You’re driving home. Police pull you over, allegedly for a traffic violation. After you provide your license and registration, the officer catches you off guard by asking: “Since you’ve got nothing to hide, you don’t mind unlocking your phone for me, do you?” Of course, you don’t want the officer to copy or rummage through all the private information on your phone. But they’ve got a badge and a gun, and you just want to go home. If you’re like most people, you grudgingly comply.
Police use this ploy, thousands of times every year, to evade the Fourth Amendment’s requirement that police obtain a warrant, based on a judge’s independent finding of probable cause of crime, before searching someone’s phone.
https://www.eff.org/deeplinks/2021/01/so-called-consent-searches-harm-our-digital-rights
Elbmar OP wrote (edited )
Reply to Flashback: After 8chan users migrated to Zeronet, their IP addresses were exposed by The Daily Beast by Elbmar
This is old news but it was news to me. I remember the 8chan users migrating to Zeronet in 2019 but I was not aware that so many had their IP addresses exposed.
Peer-to-peer networks expose a user’s internet address to anyone who cares to look. That’s how copyright lawyers catch people trading movies, music and software, and it’s how police and FBI agents arrest pedophiles trading child porn online.
ZeroNet works the same way, a fact that’s been much-discussed on the new site. For that reason, ZeroNet integrates tightly with Tor, an anonymity system that places layers of cut-out addresses between a user and the websites they visit. But only 41 percent of 08chan’s users are using Tor, based on our analysis of the peer-to-peer traffic at the site.
Users on 08chan have been complaining that the site is buggy and slow over Tor, and the site’s own administrator initially encouraged anons to just connect directly.
...
The Daily Beast captured 819 IP addresses for 08chan users connecting from 62 different countries.
Also, the users were concerned about child porn
“Say someones a f****t and uploads cp [child porn],” one Zeronet user wondered on Wednesday, as 8chan users flooded Zeronet’s discussion board. “If i happen to download it, it gets shared from my computer, right? And if i dont notice it bunch of people can download it from me? So im a distrubutor at that point, arent i?”
I think the main takeaway from this is that an ideal network for users of 8chan, Parler etc. to migrate to would be
- Anonymous by default, and fast regardless.
- Text only, so that no one worries about accidentally downloading and distributing child porn.
I'm aware that even a regular web browser downloads child porn when a user views a page with it. But anyone who comes across child porn is required by law to report it. That puts people in a weird legal position if they come across it by accident while using a network like zeronet. Legally and morally, the correct thing to do is to remove it from the zites you are distributing and then report it but reporting might bring more attention to yourself even if you use the anonymous report feature, especially since you were briefly a distributor.
I think using a peer to peer network is one thing they got right, because there's no individual that can be threatened, coerced, or persuaded into shutting 08chan down.
Elbmar wrote
Reply to comment by Wahaha in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
People can delete their messages but I haven't seen it happen enough that it really bothers me.
Yeah it's preferable for news stories to remain up forever. Maybe IPFS could eventually become popular enough that news organizations use it as well. But in the meantime archivists can use it to archive news stories permanently. I agree that it's important for news articles, scientific articles, statements from politicians etc. to not be memoryholed. But ideally, right wing groups should use private anonymous networks with auto-disappearing messages because it's safer. Members being targeted by law enforcement has a much worse effect on a group than any negatives that might come from people deleting their own messages.
Wahaha wrote
Reply to comment by Elbmar in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
If you're participating in a discussion and then memory hole your contributions, nobody can read up on the discussion, since part of it is missing. You could also write up a news story and then memory hole it yourself, if you feel like it.
The ability to remove something you published can be used maliciously. Thus, one of the points of decentralization is to prevent anyone from even having that ability.
Elbmar wrote
Reply to comment by Wahaha in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
Not sure what malicious use would be. I haven't ever seen the type of drama where someone says something, deletes it, and then denies ever saying it and gets into arguments with people about it.
Ultimately, advantages are subjective for different people. You value posts existing forever but many people prefer the opposite. Signal is popular partially because of the disappearing messages feature. I think especially on the right, people will increasingly value privacy over convenience. I think we are probably heading into a very totalitarian, technocratic future where it will be more and more dangerous to have right wing views.
Personally, if I see a very interesting post online, I sometimes just save it in a document on my computer. If scuttlebutt implements the delete message feature, it would be nice for them to also have a save message feature that saves the message but not the username. Or allow users to just remove their identity from messages that they don't want associated with themselves any more. Similar to how reddit shows [deleted] for the username after someone deletes an account.
Patchwork and apps like it could agree to not show deleted messages in their user interface. That way, if someone was making backups, it would be harder to read deleted messages. It would still be possible, but the person doing it would need to know how to decrypt them. Don't know if that would be a desired feature by the community or not, but it would be a way to get the delete feature as complete as possible.
Wahaha wrote
Reply to comment by Elbmar in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
I can see why people would want that feature, but it wouldn't change that somebody would have the ability to memory hole something, which isn't desirable, since it can be used maliciously and thus has the ability to harm trust.
If I can't trust for everything to remain there forever, there's no big advantage over centralized solutions.
Luckily, by design, all the content I see ends up saved on my computer, so with a differential backup, it should be trivial to go back in time and read memory holed posts.
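For anyone who wants to do the same, here's a minimal sketch of that kind of differential backup in Python: it snapshots only the files that changed since the last run, so memory-holed posts stay readable in older snapshots. The data directory path is a placeholder, not necessarily where any particular client actually keeps its feed.

```python
# Minimal differential-backup sketch: copy only files that changed since the
# last snapshot, so old (later memory-holed) posts stay readable offline.
# DATA_DIR is an assumption; point it at wherever your client stores its feed.
import hashlib
import json
import shutil
from datetime import datetime
from pathlib import Path

DATA_DIR = Path.home() / ".ssb"           # hypothetical client data directory
BACKUP_ROOT = Path.home() / "ssb-backups"

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot() -> None:
    BACKUP_ROOT.mkdir(exist_ok=True)
    manifest_path = BACKUP_ROOT / "manifest.json"
    old_manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}

    snap_dir = BACKUP_ROOT / datetime.now().strftime("%Y%m%d-%H%M%S")
    new_manifest = {}
    for path in DATA_DIR.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(DATA_DIR))
        digest = file_hash(path)
        new_manifest[rel] = digest
        if old_manifest.get(rel) != digest:   # new or changed file only
            dest = snap_dir / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)

    manifest_path.write_text(json.dumps(new_manifest, indent=2))

if __name__ == "__main__":
    snapshot()
```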
Wahaha wrote
Reply to comment by !deleted846 in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
GDPR only applies to personal data. Whatever you posted is still fair game. Especially if it was under a pseudonym in the first place. It's different from the "right to be forgotten".
Also, on a technological level this process isn't automated. Someone has to go in there, make sure it's your data and delete it manually from the database. It could be automated in the future, but it wasn't in the past and without building everything from scratch again, it also won't be in the future.
Also, I'm an IT guy from Europe that is very fortunate that no one ever asked for shit to be deleted. But on the bright side, even if somebody did, there's still no way for them to verify that we actually deleted everything. So reasonably, all we have to do is to no longer expose their information and nobody would be any the wiser.
Elbmar wrote (edited )
Reply to comment by Wahaha in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
I think the main advantage of decentralized over centralized is that other people can't memory hole your posts. If you can memory hole your own posts, that is an advantage. If you ever get in trouble with the law, it's helpful to have no online history that they know about. Ideally, they will not know your username, but the right is too online now compared to the left. The right really should be using the internet to facilitate offline organizing more often, and that introduces the possibility of law enforcement knowing your online identity. But for example, if you are defending yourself from Antifa and get charged with assault, you may be happy if you deleted all your posts before meeting up with people so nothing you said can be twisted and used against you (though they might say it's suspicious that you deleted all your posts. It's nice that in Matrix, changing your password encrypts all your old posts by default, which looks less suspicious). The NSA or FBI could certainly still have the posts you deleted and know that you made them but local law enforcement is not so sophisticated.
I think you could have scuttlebutt or something like it, which stores all messages for you to read offline, but also have a feature where if you say that you want all of your posts deleted, then your computer could send that message out to all of your peers. They would forward that message to any of their peers who can also read your messages. (See the "Follow Graph" here https://ssbc.github.io/scuttlebutt-protocol-guide/#follow-graph ) The peers that are already online would respond immediately and delete your posts from their local store. Some of your peers and peers of peers with access to your posts could be offline so they would still retain your posts temporarily, but when they connect to the internet again, those peers would see that you want your posts deleted, either by checking with you or their peer who is connected to you, and they would immediately delete them as well.
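A toy sketch of that propagation idea (this is not Scuttlebutt's actual protocol, and all the names are made up): a "delete my posts" notice gossips through the follow graph, and a peer that was offline applies it when it comes back.

```python
# Toy model of the propagation described above: a "delete-all" notice gossips
# through the follow graph, and peers that were offline apply it at their next
# sync. NOT the Scuttlebutt protocol; names and structure are invented.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    online: bool = True
    store: dict = field(default_factory=dict)    # author -> list of posts
    pending: set = field(default_factory=set)    # authors whose posts must be purged
    follows: list = field(default_factory=list)  # peers we gossip with

    def receive_delete(self, author: str) -> None:
        if author in self.pending:
            return                               # already seen; stops gossip loops
        self.pending.add(author)
        if self.online:
            self.apply_pending()
            for peer in self.follows:            # forward to our own peers
                peer.receive_delete(author)
        # An offline peer would really hear about this at its next sync;
        # here the notice is simply queued for it in advance.

    def apply_pending(self) -> None:
        for author in self.pending:
            self.store.pop(author, None)

    def reconnect(self) -> None:
        self.online = True
        self.apply_pending()

# alice's posts are replicated to bob and to carol, who is currently offline.
alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol", online=False)
bob.follows, carol.follows = [carol], [bob]
bob.store["alice"] = ["post1"]
carol.store["alice"] = ["post1"]

bob.receive_delete("alice")    # alice asks her peers to drop her posts
carol.reconnect()              # carol purges when she comes back online
print(bob.store, carol.store)  # {} {}
```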
In the scuttlebutt documentation I saw that in the future they do want to allow people to delete posts and it is just a feature they haven't implemented yet. They also want to hide IP addresses by default.
We want Scuttlebutt to be a safe cozy place but there are still some things we need to fix:
- Blocked people can see your public messages.
- Content from blocked people is still on your computer. (This is almost fixed!)
- Patchwork has some bugs that let you see blocked people in certain situations when they should be hidden.
- Scuttlebutt doesn’t provide IP address anonymity by itself, but you can use it with a VPN or Tor.
- Messages can’t be deleted yet.
https://scuttlebutt.nz/docs/introduction/detailed-start/#stay-happy-and-safe
Wahaha wrote
Reply to comment by Elbmar in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
You wouldn't have to do anything complicated like that. Just create regular differential backups of everything, then you can go back in time and see the posts again. One of the points of decentralized networks is that you can still read everything, even without Internet. So if you design it in a way that requires an internet connection to read posts, it's no longer decentralized.
Another point is, that the reason people want to use decentralized solutions is so that nobody has the ability to memory hole anything. Not even typos. If that's not the case, then what's the advantage over centralized stuff?
Elbmar wrote
Reply to How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
Matrix is federated, not p2p, but when using it I noticed that if I changed my password, the encryption key for my posts would change as well, which would make all of my past posts unreadable to everyone including myself, but my new posts would be readable. Of course, if my past password was weak, it would still be easy for someone to decrypt my past posts.
It was possible to delete and edit posts as well. And if you disabled an account, you were met with a warning saying that people would not be able to read your past posts, which may disrupt the flow of conversations. Also, creators of a room could set it up so that any new user had no ability to view the old posts in the room. You could change your display name at any time, but your unique id is the name you chose when signing up. Your unique id is visible to anyone who right clicks on your display name.
When it comes to p2p tech, so far everyone is saying what you are suggesting is impossible, but I am at least interested to know whether it would make sense to code something similar to this, or if something similar already exists:
- All posts are encrypted. Nodes you connect to store your posts, but in encrypted form, and they store the encryption key for your posts. They store a generated unique id, not your display name. So if someone wants to save your posts to use against you, they have to have some basic technical capability: they need to know your account's unique id, not display name, and use the stored key to decrypt the posts associated with that id. (Most would just screenshot it in this case, which can be more easily faked, so there is more plausible deniability for you.)
- You can change your encryption key at any time. If you change the encryption key for your posts, then the key will be changed for all nodes connected to you, making your past posts unreadable to yourself and connected nodes.
- If any node disconnects from you or you disconnect from it, your files automatically get deleted from their store and their files get automatically deleted from your store.
If someone really wanted to hold on to someone's posts to use against them later, they could of course make a copy of the store before they disconnect from the other node, but they would need some basic tech knowledge to decrypt what is in it. Unlike making an archive link of some centralized page which requires almost no tech knowledge. If the p2p network gets popular enough, someone might make a service to simplify this process for people (similar to archive.org). But privacy would at least be comparable to centralized services.
But I know jack shit about coding p2p protocols and applications.
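For what it's worth, here's a rough sketch of the encrypt-and-rotate idea using Python's cryptography package. The class and field names are made up for illustration; the point is just why rotating the stored key leaves old ciphertext unreadable to the nodes that mirror it.

```python
# Minimal sketch of the scheme described above, using the "cryptography"
# package (pip install cryptography). All names/structure are invented; this
# only shows why rotating the stored key makes old posts unreadable.
import uuid
from cryptography.fernet import Fernet, InvalidToken

class NodeStore:
    """What a peer keeps for one remote author: ciphertext plus current key."""
    def __init__(self) -> None:
        self.author_id = str(uuid.uuid4())   # generated id, not a display name
        self.key = Fernet.generate_key()     # current key, mirrored to peers
        self.posts: list[bytes] = []          # ciphertext only

    def publish(self, text: str) -> None:
        self.posts.append(Fernet(self.key).encrypt(text.encode()))

    def rotate_key(self) -> None:
        # Old ciphertext stays in the store, but without the old key it is
        # unreadable to us and to every connected node that mirrors it.
        self.key = Fernet.generate_key()

    def read_all(self) -> list[str]:
        out = []
        for blob in self.posts:
            try:
                out.append(Fernet(self.key).decrypt(blob).decode())
            except InvalidToken:
                out.append("<unreadable: encrypted under a rotated key>")
        return out

store = NodeStore()
store.publish("meet thursday")
store.rotate_key()
store.publish("new post after rotation")
print(store.read_all())   # first post unreadable, second readable
```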
dontvisitmyintentions wrote
Reply to comment by Rambler in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
Decentralization by means of replication eliminates the power to control that data entirely, in exchange for dissemination. The way to distance yourself from your posts is the same as on an image board: create a new pseudonymous persona, or maintain no persona at all.
In federated systems, nodes rely less on local stores, so deleting data from a node may work better. It also helps make Mastodon/Pleroma confusing and fragmented, because instances capriciously block other nodes and users without any signal that it's happening. The result is that users subscribe to multiple nodes lest their conversations be mangled by getting muted by third parties.
Federated systems could be more friendly and work with users' idea of privacy, but that requires them not to abuse the powers which they abuse now. There's no future for it in widespread society, and any smaller group you trust not to abuse it, you can also trust not to abuse your posts.
Wahaha wrote
Reply to How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
The entire point of decentralization is to make exactly this impossible. The promise is that no one even has the ability to memory hole anything.
The right to be forgotten isn't granted in the centralized world, either. On a technical level, all that happens is that what you posted gets hidden. Easily retrievable ten years down the line, if someone with access wanted to. The reasons for that are legal in nature, as far as I know. So if it's a small site without a bunch of lawyers in the background, you might have a chance to get your stuff actually deleted. Especially if the one who operates it likes the concept of privacy. But as a user, you have no way to verify either way.
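A tiny illustration of that "hidden, not deleted" pattern, with made-up table and column names: the delete button on a typical centralized backend just flips a flag, and anyone with database access can still read the post years later.

```python
# Sketch of the soft-delete pattern described above. Table/column names are
# invented; this only illustrates "hidden from the public, kept by the operator".
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT, deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO posts (author, body) VALUES ('user123', 'something regrettable')")

# What the "delete" button actually does on many sites:
db.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

# What the public sees (nothing)...
print(db.execute("SELECT body FROM posts WHERE deleted = 0").fetchall())  # []
# ...versus what anyone with database access still sees.
print(db.execute("SELECT body FROM posts").fetchall())  # [('something regrettable',)]
```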
Since decentralization redistributes power from a single source to everyone, in a decentralized network everyone has that ability. Of course, everyone would first have to agree on hiding the content in the first place.
I don't really get why people want this "right" anyway. It doesn't exist in real life. All your records are kept and all the people involved will remember. Imagine if Donald Trump would say "guys, I really want to be forgotten online, please delete everything mentioning my name". That would be ridiculous, wouldn't it?
Rambler OP wrote
Reply to comment by dontvisitmyintentions in How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
That's kind of what I've gathered but hopefully I get hit with some knowledge. My understanding is only very basic of it. And I still hop on Zeronet / Aether and lurk. I know other, similar networks exist too.
I'm not shitting on those types of networks, they certainly have value that centralized networks do not. Not sure if there is a good 'in-between' where a user/individual still retains the ability to control the data they've published after clicking "submit".
dontvisitmyintentions wrote
Reply to How do P2P, decentralized networks work when it comes to a user or individual wanting to remove their information from it? by Rambler
Only if the users and nodes cooperate. So, no.
LnWpxtqPEXyDjAH9rs27 OP wrote
Reply to comment by BlackWinnerYoshi in Directory of services that send encrypted and/or signed E-mail notifications by LnWpxtqPEXyDjAH9rs27
I was just asking if there was a nonblog page. Anyways, I added it.
BlackWinnerYoshi wrote
Reply to Thoughts on Starlink (in regards to privacy) by Rambler
TL;DR: in regards to privacy, Starlink is... not so great.
Well, let's see what Starlink's situation is, in regards to privacy:
- Tor support - I didn't actually order Starlink, but it looks like it doesn't block Tor when I just visit the site.
- Monero acceptance - I guess it doesn't support cryptocurrency, as per Starlink Pre-Order Agreement (clear net only), paragraph two, point three.
- No personal data required for registration - I don't know where to register (I guess I would need to purchase Starlink to see), but if one of the recovery methods (clear net only) is by phone, that's already suspicious.
- Compatibility with established standards - this could apply because of built-in VPN support (OpenVPN or possibly WireGuard) and encryption of the e-mails you get (PGP). There's no mention of e-mail encryption, and no mention of VPN support either; VPNs might even be disallowed by SpaceX.
- No Cloudflare - it looks like there's no Clownflare or some other MITM.
- As little downtime as possible - not a privacy issue, but the service actually has to be usable. Since SpaceX is so massive, I doubt downtimes are much of a problem.
So I guess just by looking at those six points, it's kind of average. But of course, this alone tells only the bare minimum, so let's look at the privacy policy (clear net only):
- IP addresses - paragraph one, points six to seven, mention them, but they don't say for how long the information is stored, only why they store it: paragraph two, point three, gives analytics as the reason.
- Content data - paragraph two, point one, letter five, might suggest they could watch things like messages, e-mails, search queries, to detect "fraud".
- System info - paragraph one, point six, mentions that operating system and platform, browser type and version, time zone setting and location, are collected.
- Metadata - I think that the data collected as per paragraph one, point seven, might apply to metadata.
- Interaction data - paragraph one, point six, also mentions that the interaction with their services is collected.
- Third party sharing - paragraph three, mentions that your data will be shared to their "affiliates", government, and organizations involved in business transfers.
Well, that already worsens the situation with Starlink. What about the history of SpaceX? Are they hiding skeletons in their closet? I have no idea; I would have to dig really deeply to find out, and I don't want to do that.
BlackWinnerYoshi wrote
Reply to comment by LnWpxtqPEXyDjAH9rs27 in Directory of services that send encrypted and/or signed E-mail notifications by LnWpxtqPEXyDjAH9rs27
I don't know what you don't understand; it's just that having a PGP key added to your account not only makes the notifications encrypted, but they can also remove 2FA if you send them a message about that signed with your PGP key. That's all.
LnWpxtqPEXyDjAH9rs27 OP wrote (edited )
Reply to comment by smartypants in Directory of services that send encrypted and/or signed E-mail notifications by LnWpxtqPEXyDjAH9rs27
The repository is not about these kinds of services. It's about websites that send you email notifications or do email support using encryption/signing. For example, if ramble has a public key, they can sign every email they send you (notifications, password resets, etc.) so you can verify you are not getting phished and that the email really comes from them. Or if you have sensitive info to send them, you can encrypt it before sending it, regardless of whether you use fastmail, posteo, tutanota, protonmail, gmail or any other email service.
This also doesn't have to be limited to email communications/notifications. If a website decides to only support notifications through XMPP or any other method, it can still apply, it's just that email is the most widely adopted.
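As a rough example of what a user gets out of the signing case, here's how one might verify a signed notification with the python-gnupg package. The key file and message file names are hypothetical; the point is just that anyone can check the signature against the public key the site publishes.

```python
# Rough sketch: verify that a notification e-mail was signed by the site's
# published key. Uses python-gnupg (pip install python-gnupg) on top of a
# local GnuPG install. File names below are made up for illustration.
import gnupg

gpg = gnupg.GPG()

# One-time step: import the site's published public key.
with open("ramble-public-key.asc") as f:          # hypothetical key file
    import_result = gpg.import_keys(f.read())
print("imported fingerprints:", import_result.fingerprints)

# Verify a clearsigned notification saved from your inbox.
with open("password-reset-notification.txt", "rb") as f:   # hypothetical message
    verified = gpg.verify_file(f)

if verified.valid:
    print("good signature from", verified.username, verified.key_id)
else:
    print("signature missing or invalid - treat as possible phishing")
```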
LnWpxtqPEXyDjAH9rs27 OP wrote
Reply to comment by BlackWinnerYoshi in Directory of services that send encrypted and/or signed E-mail notifications by LnWpxtqPEXyDjAH9rs27
Is there a better link explaining it than this blog post? Thanks for the suggestion.
As for GitHub, I know it's owned by Microsoft but I needed git where most people have an account so they can easily contribute. Apart from being owned by Microsoft, they are not behind Cloudflare, they don't use reCaptcha and you can view the README without JavaScript.
LnWpxtqPEXyDjAH9rs27 OP wrote
Reply to comment by Rambler in Directory of services that send encrypted and/or signed E-mail notifications by LnWpxtqPEXyDjAH9rs27
Added it, thanks.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Collapsing Emergent System
Much of our targeter’s workday is spent on information extraction and organization, the vast majority of which is, well, robot work. She’ll be repeating manual tasks for most of the day. She knows what she needs to investigate today to continue building her target or network profile. Today it’s a name and a phone number. She has a time-consuming, tedious, and potentially error-prone effort ahead of her, a “swivel chair process”: tracking down the name and phone number across multiple stovepiped databases using a variety of outmoded software tools. She’ll map what she finds in a network analysis tool, in an electronic document, or <wince> in a pen-and-paper notebook. Now…finally…she will begin to use her brain. She’ll look for patterns, she’ll analyze the data temporally, she’ll find new associations and correlations, and she’ll challenge her assumptions and come to new conclusions. Too bad she spent 80% of her time doing robot work.
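To make the "robot work" point concrete, here is a purely illustrative toy in Python: the same name-and-phone lookup collapsed into one pass over several made-up datasets. It reflects no real system or data; it just shows the kind of cross-referencing a machine can do in milliseconds, leaving the human to do the analysis.

```python
# Purely illustrative toy: the "swivel chair" lookup collapsed into one query
# over several made-up stovepiped datasets. Nothing here reflects any real
# government system, tool, or data.
from collections import defaultdict

# Pretend each "database" is just a list of records from a different source.
stovepipes = {
    "docex":  [{"name": "A. Example", "phone": "+000-555-0001", "doc": "ledger_04.pdf"}],
    "osint":  [{"name": "A. Example", "handle": "@example", "phone": None}],
    "sigint": [{"phone": "+000-555-0001", "contact_of": "+000-555-0002"}],
}

def cross_reference(name: str, phone: str) -> dict:
    """Pull every record mentioning the name or phone, grouped by source."""
    hits = defaultdict(list)
    for source, records in stovepipes.items():
        for rec in records:
            if rec.get("name") == name or rec.get("phone") == phone:
                hits[source].append(rec)
    return dict(hits)

# One query instead of hours of copy-paste between tools.
print(cross_reference("A. Example", "+000-555-0001"))
```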
This is the problem as it stands today. The targeter is overwhelmed with too much unstructured and stovepiped information and does not have access to the tools required to clean, sift, sort and process massive amounts of data. And remember, the system she operates is about to receive exponentially more data. Absent change, a handful of things are almost certain to happen:
- More raw data will be collected than is actually relevant, and as a result will increase the stress on infrastructure to store all of that data for future analysis.
- Infrastructure (technical and process related) will continue to fail to make raw data available to technologists and targeters to begin processing at a mission relevant pace.
- Targeters and analysts will continue to perform manual tasks that take the majority of their time, leaving little time for actual analysis and delivery of insights.
- The timeline from data to information, to insights, to decision making is extended exponentially as data exponentially increases.
- Insights as a result of correlations between millions of raw data points will be missed entirely, leading to incorrect targets being identified, missed targets or patterns, or targets with inaccurate importance being prioritized first.

This may seem banal or weedy, but it should be very concerning. This system – how the United States processes the information it collects to identify and prevent threats – will not work in the very near future. The data stovepipes of the 2020s can result in a surprise or catastrophe like the institutional stovepipes of the 1990s; it won't be a black swan. As the U.S. competes with Beijing, its national defense will require more speed, not less, against more data than ever before. It will require evaluating data and making connections and correlations faster than a human can. It will require the effective processing of this mass of data to identify precision solutions that reduce the scope of intervention to achieve our goals, while minimizing harm. Our current and future national defense needs our targeter to be motivated, enabled, and effective.