Recent comments
smartypants wrote
Reply to comment by Wahaha in Stalker 'found Japanese singer through reflection in her eyes' by onion
It's been a common trope in sci-fi movies and stories since 1982.
"ZOOM... ENHANCE!"
such as a scene in Blade Runner
smartypants wrote
Reply to Commercialized Penis Envy by Wahaha
DOZENS of companies make these for women to stand up to pee:
- SheWee
- GoGirl FUD
- Whiz Freedom
- Gotta Tinkle
- Peecock
- Pee-Zee
- Peequality
- The Stand Up
- The Travel Jane
- PeeBuddy
- Ms Whiz
- Mr. Limpy
- Mr. Fenis
- The P-Mate
- KleanGo
- Travel John
- Lady J
- P-EZ
- LadyP
- The pStyle ® Reusable
- Pibella
- Freshette
- Chickpea
- AquaEve
- TinkleBelle
... and about 10 more.
Rambler wrote
Reply to This is Why I no Longer Frequent Reddit by HMTg927
Good writeup. Reddit is and has been cancerous for a while.
BlackWinnerYoshi wrote
Reply to Commercialized Penis Envy by Wahaha
archive.is doesn't like Tor for some reason, so here's an archive.org link (clear net only)
burnerben wrote
Reply to comment by zab_ in Mexico Set to Legalize Marijuana, Becoming World’s Largest Market by Rambler
just easier money laundering
Wahaha wrote
Reply to comment by Rambler in Why We Absolutely Must Ban Private Use of Facial Recognition by Rambler
What's the threat scenario of some random company acquiring your face? I think of privacy as a safety feature, so if I can't think of a threat, I have a harder time caring.
That and my passion is archiving, so innately deleting data is somewhat uncomfortable for me.
zab_ wrote
I wonder how the cartels feel about this. By now they must have prepared legal outlets for their product which are ready to go live on day 1, so legalization will probably not hurt their bottom line.
Rambler wrote
Reply to Is it too late to start posting cat pics here? by Wahaha
Never too late.
Toxicant wrote
Reply to comment by Imperator in Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Element.io is fantastic
Rambler OP wrote
Reply to comment by Wahaha in Why We Absolutely Must Ban Private Use of Facial Recognition by Rambler
My concern is more private use. I get my face scanned to enter my workplace, and the (biometrics) company states that they retain that for up to 3 years beyond end of employment.
To me, that's up to 3 years too long.
And I don't "mind" it, so long as that information was stored locally and could be purged by HR when an employee is no longer employed, as part of an after-employment checklist. For example, if you have a company with 700 active employees, then on your LAN you have the biometric hardware/software operating and it contains no more than 700 faces, and doesn't face anything public, as it's only used to allow/deny entry to the building. Doesn't need a web facing control panel, no need to store that data 'in the cloud', etc.
But, that's not how things are done. The biometric company could be bought up by another. It could be hacked. It could be secretly funded by any alphabet agency or sharing data with them.
If it was private use, open source, localized installs across companies and company owned worksites... no problem.
As far as public stuff goes? I'm kind of with you. I have cameras. I use them. Moreso when I lived in the city. Shortly after installation I thought all the hoodlums were casing cars on the street because they were walking in the street instead of on my sidewalk. Turns out they noticed the cameras and thought they were out of view of them if they just walk in the middle of the road. Nope, I still see ya buddy.
Wahaha wrote
As much of a privacy nightmare as it is, I kinda dream of a city with high-resolution security cams featuring facial recognition covering every public space, even the sewers. But they would be accessible to everyone, so you can watch it yourself. It could be cooler than reality TV.
Also, I never was too concerned with privacy in public. The problem is how the system can be abused in the future, but then everyone is more or less keeping a tracking device on their body and publishing their opinions on the Internet, so I'm not sure if facial recognition could be abused to do something that isn't already possible anyway.
Maybe people would finally stop littering, if there are cams identifying and fining them automagically.
Wahaha wrote
Have you seen the show Higashi no Eden? Friends of the protagonist created an app that would let them identify everything, people included. Everyone had the ability to identify new things and add to the database. It was a pretty neat tool, but utterly futuristic back in 2009 when the show aired. That was about when smartphones became common.
And it looked a lot like that screenshot from the site.
The concept was kinda dwarfed by the real point of the show, which was a mobile phone with a billion or so and an operator doing tasks for you by using that money. Like shooting rockets or shipping all shut-ins off to Africa or something like that. Good fun.
BlackWinnerYoshi wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Well, while open source does not mean it's secure, this is still a weird thing to do.
I would simply recommend dropping Signal and using XMPP with OMEMO encryption instead, since that's the gold standard of instant messengers, at least for me. You should especially stop using Signal because it requires your phone number, which immediately disqualifies it as a private messenger.
Imperator wrote
Reply to comment by J0yI9YUX41Wx in Watch: "Ethics" Professor Says Americans Will Take Vaccine in Exchange for Return of Freedom by Elbmar
Yeah. It's not ideal from a perspective of equality and solidarity but what other option is there? I guess the choice is either nobody having freedom or the restrictions being loosened up for those who become immune to the virus and contribute to group immunity.
Imperator wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Have you tried element.io and Matrix? Been using it for years now and I'm very happy with it. Clients for all kinds of platforms and bridges to all kinds of networks exist.
KeeJef wrote
Reply to Signal's open sourced server code hasn't been updated for over a year. Should we be concerned? by Rambler
Yes lol, the client is making calls to endpoints on the server which don't even exist in the publicly released code. Saying all messages are encrypted avoids the question of metadata and how the server actually deals with that metadata.
J0yI9YUX41Wx wrote
Reply to Watch: "Ethics" Professor Says Americans Will Take Vaccine in Exchange for Return of Freedom by Elbmar
Sounds like a reasonable trade to me.
onion OP wrote
Reply to comment by Wahaha in Stalker 'found Japanese singer through reflection in her eyes' by onion
It's the same story, but figured it was worth posting even though it's old.
Yeah, I remember seeing a picture taken from one of those extremely high def security cameras a few years ago. It was amazing how far you could zoom in. Maybe this was it? I don't know. I can't see it since I'm using Tor.
Wahaha wrote
Again or is that the story from a couple years back?
Anyway, all these TV shows were ahead of their time, with their infinite zoom that is now at least somewhat feasible.
Just think about how good security cameras can be these days, zooming in on what a driver across the street is reading. Same with satellites.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
Innovating the System
To overcome the exponential growth in data and subsequent stovepiping, the IC doesn’t need to hire armies of 20-somethings to do around-the-clock analysis in warehouses all over northern Virginia. It needs to modernize its security approach to connect these datasets, and apply a vast suite of machine learning models and other analytics to help targeters start innovating. Now. Technological innovations are also likely to lead to more engaged, productive, and energized targeters who spend their time applying their creativity and problem-solving skills, and spend less time doing robot work. We can’t afford to lose any more trained and experienced targeters to this rapidly fatiguing system.
The current system, as discussed, is one of unvalidated data collection and mass storage, manual loading, mostly manual review, and robotic swivel-chair processes for analysis.
The system of the future breaks down data stovepipes and eliminates the manual and swivel chair robot processes of the past. The system of the future automates data triage, so users can readily identify datasets of interest for deep manual research. It automates data processing, cleaning, correlations and target profiling – clustering information around a potential identity. It helps targeters identify patterns and suggests areas for future research.
How do current and emerging analytic and ML techniques bring us to the system of the future and better enable our targeter? Here are four ideas to start with:
- Automated Data Triage: As data is fed into the system, a variety of analytics and ML pipelines are applied. A typical exploratory data analysis (EDA) report is produced (data size, file types, temporal analysis, etc.). Additionally, analytics ingest, clean, and standardize the data. ML and other approaches identify languages, set aside likely irrelevant information, summarize topics and themes, and identify named entities, phone numbers, email addresses, etc. This first step aids in validating data need, enables an improved search capability, and sets a new foundation for additional analytics and ML approaches. There are seemingly countless examples across the U.S. national security space.
- Automated Correlation: Output from numerous data streams is brought into an abstraction layer and prepped for next-generation analytics. Automated correlation is applied across a variety of variables: potential name matches, facial recognition and biometric clustering, phone number and email matches, temporal associations, and locations.
- Target Profiling (Network, Spatial, and Temporal Analytics): As the information is clustered, our targeter now sees associations pulled together by the computer. The robot, leveraging its computational speed along with machine learning for rapid comparison and correlation, has replaced the swivel chair process. Our targeter is now investigating associations, validating the profile, and refining the target’s pattern-of-life. She is coming to conclusions about the target faster and more effectively and is bringing more value to the mission. She’s also providing feedback to the system, helping to refine its results.
- AI-Driven Trend and Pattern Analysis: Unsupervised ML approaches can help identify new patterns and trends that may not fit into the current framing of the problem. These insights can challenge groupthink, identify new threats early, and find insights that our targeters may not even know to look for.
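To make the triage idea concrete, here is a toy sketch of that first step in Python. Everything in it (the sample documents, the regexes, the `triage` helper) is my own hypothetical illustration, not anything described in the article:

```python
import re
from collections import Counter

# Hypothetical ingested documents standing in for a captured data dump.
docs = [
    {"name": "notes.txt", "text": "Contact Ahmed at ahmed@example.org or +1-555-0142."},
    {"name": "memo.txt", "text": "Meeting moved; call +44 20 7946 0958 before Friday."},
    {"name": "img_001.jpg", "text": ""},
]

# Crude selector patterns; a real system would use trained extractors.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def triage(documents):
    """Produce a minimal EDA-style report: file-type counts plus
    extracted selectors (emails, phone numbers) per document."""
    report = {
        "file_types": Counter(d["name"].rsplit(".", 1)[-1] for d in documents),
        "selectors": [],
    }
    for d in documents:
        emails = EMAIL_RE.findall(d["text"])
        phones = PHONE_RE.findall(d["text"])
        if emails or phones:
            report["selectors"].append(
                {"doc": d["name"], "emails": emails, "phones": phones}
            )
    return report

report = triage(docs)
```

A real pipeline would use proper language-ID and named-entity models rather than regexes, but the shape (ingest, extract selectors, summarize into a report that feeds later correlation) is the same.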
Learning User Behavior: Our new system shouldn’t just enable our targeter; it should learn from her. Applying ML behind the scenes to monitor our targeter can help drive incremental improvements to the system. What does she click on? Did she validate or refute a machine correlation? Why didn’t she explore a dataset that may have had value to her investigation and analysis? The system should learn and adapt to her behavior to better support her. Her tools should highlight where data that could have value to her work may reside. It should also help train new hires. Let’s be clear: we’re far from the Laplace’s demon of HBO’s “Westworld” or FX’s “Devs”; there is no super machine that will replace the talented and dedicated folks who make up the targeting cadre. Targeters will remain critical to evaluating and validating these results, doing deep research, and applying their human creativity and problem solving. The national security space hires brilliant and highly educated personnel to tackle these problems; let’s challenge and inspire them, not relegate them to the swivel chair processes of the past.
We need a new system to handle the data avalanche and support the next generation. Advanced computing, analytics, and applied machine learning will be critical to efficient data collection, successful data exploitation, and automated triage, correlation, and pattern identification. It’s time for a new chapter in how we ingest, process, and evaluate intelligence information. Let’s move forward.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Collapsing Emergent System
Much of our targeter’s workday is spent on information extraction and organization, the vast majority of which is, well, robot work. She’ll be repeating manual tasks for most of the day. She knows what she needs to investigate today to continue building her target or network profile. Today it’s a name and a phone number. She has a time-consuming, tedious, and potentially error-prone effort ahead of her–a “swivel chair process”–tracking down the name and phone number in multiple databases using a variety of outmoded software tools. She’ll manually investigate her name and phone number in multiple stovepiped databases. She’ll map what she’s found in a network analysis tool, in an electronic document, or <wince> a pen-and-paper notebook. Now…finally…she will begin to use her brain. She’ll look for patterns, she’ll analyze the data temporally, she’ll find new associations and correlations, and she’ll challenge her assumptions and come to new conclusions. Too bad she spent 80% of her time doing robot work.
This is the problem as it stands today. The targeter is overwhelmed with too much unstructured and stovepiped information and does not have access to the tools required to clean, sift, sort and process massive amounts of data. And remember, the system she operates is about to receive exponentially more data. Absent change, a handful of things are almost certain to happen:
- More raw data will be collected than is actually relevant, increasing the stress on infrastructure to store all of that data for future analysis.
- Infrastructure (technical and process related) will continue to fail to make raw data available to technologists and targeters to begin processing at a mission-relevant pace.
- Targeters and analysts will continue to perform manual tasks that take the majority of their time, leaving little time for actual analysis and delivery of insights.
- The timeline from data to information, to insights, to decision making will be extended as data exponentially increases.
- Insights drawn from correlations between millions of raw data points will be missed entirely, leading to incorrect targets being identified, missed targets or patterns, or targets of inaccurate importance being prioritized first.
This may seem banal or weedy, but it should be very concerning. This system – how the United States processes the information it collects to identify and prevent threats – will not work in the very near future. The data stovepipes of the 2020s can result in a surprise or catastrophe like the institutional stovepipes of the 1990s; it won’t be a black swan. As the U.S. competes with Beijing, its national defense will require more speed, not less, against more data than ever before. It will require evaluating data and making connections and correlations faster than a human can. It will require the effective processing of this mass of data to identify precision solutions that reduce the scope of intervention to achieve our goals, while minimizing harm. Our current and future national defense needs our targeter to be motivated, enabled, and effective.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Threat of the Status Quo
Two practical issues loom over the future of targeting and effective, focused U.S. national security actions: data overload and targeter enablement.
The New Stovepipes
Since the 9/11 Commission Report, intelligence “stovepipes” have been part of the American lexicon, reflecting bureaucratic turf wars and politics. Information wasn’t shared between agencies that could have increased the probability that the attack would be detected and prevented. Today, volumes of information are shared between agencies; exponentially more is collected and shared per month than in the months before 9/11. Ten years ago, a targeter pursuing a high value target (HVT) – say, the leader of a terrorist group – couldn’t find, let alone analyze, all of the information of potential value to the manhunt. Too much poorly organized data means the targeter cannot possibly conduct a thorough analysis at the speed the mission demands. Details are missed, opportunities lost, patterns misidentified, mistakes made. The disorganization and walling off of data for security purposes means new stovepipes have appeared, not between agencies, but between datasets – often within the same agency. As the data volume grows, these challenges have grown with it.
Authors have been writing about the issue of data overload in the national security space for years now. Unfortunately, progress in managing the issue or offering workable solutions has been modest, at best. Data of a variety of types and formats, structured and unstructured, flows into USG repositories every hour, 24/7/365. Every year it grows exponentially. In the very near future, there should be little doubt, the USG will collect against foreign 5G, IoT, advanced satellite internet, and adversary databases in the terabyte, petabyte, exabyte, or larger realm. The ingestion, processing, parsing, and sensemaking challenges of these data loads will be like nothing anyone has ever faced before.
Let’s illustrate the issue with a notional comparison.
In 2008, the U.S. military raided an al-Qa’ida safehouse in Iraq and recovered a laptop with a 1GB hard drive. The data on the hard drive was passed to a targeter for analysis. It contained a variety of documents, photos, and video. It took several hours and the help of a linguist, but the targeter was able to identify several leads and items of interest that would advance the fight against al-Qa’ida.
In 2017, the Afghan Government raided an al-Qa’ida media house and recovered over 40TB of data. The data on the hard drives was passed to a targeter for analysis. It contained a variety of documents, photos, and video. Let’s be nice to our targeter and say only a quarter of the 40TB is video – that’s still as much as 5,000 hours. That’s 208 days of around-the-clock video review, and she still hasn’t been able to review the documents, audio, or photos. Obviously, this workload is impossible given the pace of her mission, so she’s not going to do that. She and her team will only look for a handful of specific documents and largely discard the rest.
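The back-of-envelope figures in that example hold up; here is the arithmetic spelled out, with the caveat that the ~2 GB-per-hour video bitrate is my own assumption (the post only gives the 40TB, 5,000-hour, and 208-day figures):

```python
# Back-of-envelope check of the 40TB example. The 2 GB/hour video
# bitrate is an assumed average (roughly 4.4 Mbps of compressed video).
total_tb = 40
video_tb = total_tb / 4                # "only a quarter ... is video"
gb_per_hour = 2                        # assumed average bitrate
hours = video_tb * 1000 / gb_per_hour  # 10,000 GB / 2 = 5,000 hours
days = hours / 24                      # ~208 days of nonstop review
print(f"{hours:.0f} hours, {days:.0f} days")
```

At a higher assumed bitrate the hour count shrinks proportionally, but even a generous estimate leaves months of around-the-clock viewing.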
Let’s say that in 2025, the National Security Agency collected 1.4 petabytes of leaked Chinese Government emails and attachments. Our targeter and all of her teammates could easily spend the rest of their careers reviewing the data using current methods and tools.
In real life, the raid on Usama Bin Ladin’s compound produced over 250GB of material. It took an interagency task force in 2011 many months to manually comb through the data and identify material of interest. These examples shed light on only a subset of data overload. Keep in mind, this DOCEX is only one source our targeter has to review to get a full picture of her target and network. She’s also looking through all of the potentially relevant collected HUMINT, SIGINT, IMINT, OSINT, etc. that could be related to her target. That’s many more datasets, often stovepipes within stovepipes, with the same outmoded tools and methods.
This leads us to our second problem, human enablement.
onion OP wrote
Reply to comment by onion in A former CIA "targeter" explains the issues that targeters run into and how AI can help (content as comment chain within for people who don't want to visit a national security blog) by onion
The Emergent System After 9/11: Data Isn’t a Problem, Using It Is
In the wake of the 9/11 attacks, the U.S. intelligence community and the Department of Defense poured billions into intelligence collection. Data was collected from around the world in a variety of forms to prevent new terrorist attacks against the U.S. homeland. Every conceivable relevant detail of information that could prevent an attack or hunt down those responsible for attack plotting was collected. Simply put, the United States does not suffer from a lack of data. The emerging capability gap between Beijing and Washington is the processing of this data that allows for the identification of details and patterns that are relevant to America’s national security needs.
Historically, the traditional intersection of data collection, analysis, and national defense was a cadre of people in the intelligence community and the Department of Defense known as analysts. A bottom-up evolution, started after 9/11, has revolutionized how analysis is done and to what end. As data supplies grew and new demands for analysis emerged, the cadre began to cleave. The traditional cadre remained focused on strategic needs: warning policymakers and informing them of the plans and intentions of America’s adversaries. The new demands were more detailed and tactical, and the focus was on enabling operations, not informing the President. Who, specifically, should the U.S. focus its collection against? What member of a terrorist group should the U.S. military target, and where does he live, what time does he drive to meet his buddies? This new, distinct cadre of professionals rose to meet the new demand – they became known as targeters.
The targeter is a detective who pieces together the life of a subject or network in excruciating detail: their schedule, their family, their social contacts, their interests, their possessions, their behavior, and so on. The targeter does all of this to understand the subject so well that they can assess their subject’s importance in their organization and predict their behavior and motivation. They also make reasoned and supported arguments as to where to place additional intelligence collection resources against their target to better understand them and their network, or what actions the USG or our allies should take against the target to diminish their ability to do harm.
The day-to-day responsibilities of a targeter include combing through intelligence collection, be it reporting from a spy in the ranks of al-Qa’ida, a drug cartel, or a foreign government (HUMINT); collection of enemy communications (SIGINT); images of a suspicious location or object (IMINT); review of social media, publications, news reports, etc. (OSINT); or materials captured by U.S. military or partner country forces during raids against a specific target, location, or network member (DOCEX). Using all of the information available, the targeter looks for specific details that will help assess their subject or networks and predict behaviors.
As more and more of the cadre cleaved into this targeter role, agencies began to formalize their roles and responsibilities. Data piled up and more targeters were needed. As this emergent system was being formalized into the bureaucracy, it quickly became overwhelmed by the volumes of data. Too few tools existed to exploit the datasets. Antiquated security orthodoxy surrounding how data is stored and accessed disrupted the targeter’s ability to find links. The bottom-up innovation stalled. Even within the most sophisticated and well-supported environments for targeting in the U.S. Government, the problem has persisted and is growing worse. Without attention and resolution, these issues may render the system obsolete.
Rambler wrote
Reply to Commercialized Penis Envy by Wahaha
Pretty common in the backpacking, off grid, van life, etc world.
Standing up to pee is great. I don't blame women for wanting to be able to do that. Still a goofy product, but it's popular and gets the job done I guess.