How to Prevent Social Engineering In the Workplace in 2025 https://joindeleteme.com/business/blog/how-to-prevent-social-engineering-in-the-workplace-in-2025/ Wed, 14 May 2025 16:12:21 +0000 https://joindeleteme.com/?post_type=b2b-post&p=17462 TL;DR: To prevent social engineering in the workplace, take a layered approach to security. In other words, no single solution works 100% of the time. Combine technical safeguards, employee training, robust reporting mechanisms, and personal data removal from online exposure sources.

  • DeleteMe removes employees’ personal data from online sources that criminals often use to personalize their social engineering campaigns. 

The guide below explores practical strategies to protect your organization from social engineering threats. 

4 Steps to Prevent Social Engineering In the Workplace

None of these steps on their own will stop social engineering, but combined, they will give organizations a robust posture against anyone who wants to trick their employees and executives into sharing sensitive information or enabling threats. 

1. Don’t rely on email filtering

As many as 85% of all emails are malicious. Email security solutions should block at least some of the social engineering attempts within this volume before they reach employees. 

However, even though these tools typically use sophisticated algorithms to identify suspicious patterns, such as unusual sender addresses, malicious links, or attachments that could contain malware, they are far from perfect. 

It’s effectively impossible to stop every social engineering email at the email client level, especially when criminals use compromised legitimate email accounts (as is the case in at least 10% of social engineering campaigns).

2. Do social engineering training at a granular level and test as regularly as possible

Social engineering training programs can educate employees on how to recognize social engineering tactics. But only if the training feels real.

Training must be updated regularly to keep up with evolving social engineering techniques. For example, cybersecurity researchers reported a significant increase in vishing (phone-based social engineering) attacks in 2024. 

We advise companies to focus on storytelling. Share as many real-world social engineering examples as possible, ideally relevant to the kind of jobs people are doing at your organization, and show them the consequences of taking a lax approach to security. 

Of course, it will be harder to do this when the employees in question are executives, but it’s still essential.

Phishing simulations are great, too, but again, always consider (and communicate) these as organizational-level exercises and not as attempts to test individuals. 

Test the company as a whole and at the departmental level against:

  • Social engineering tactics like phishing, baiting, and pretexting. 
  • Vishing campaigns that impersonate IT support staff to gain access to sensitive information. 
  • Social engineering campaigns that impersonate employees to the organization’s IT support desk. 

Go department by department, and you can see exactly where social engineering risk is worst. 
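If your phishing simulation platform lets you export results, a short script can turn them into the department-level view described above. Below is a minimal sketch, assuming a hypothetical CSV export with department and clicked columns; your platform’s field names and export format will differ.

    # A minimal sketch: rank departments by phishing-simulation click rate.
    # Assumes a hypothetical export "simulation_results.csv" with columns:
    # department, employee_email, clicked (yes/no).
    import csv
    from collections import defaultdict

    def click_rates_by_department(path):
        totals = defaultdict(int)
        clicks = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                dept = row["department"]
                totals[dept] += 1
                if row["clicked"].strip().lower() in ("1", "true", "yes"):
                    clicks[dept] += 1
        # Highest click rate first = where social engineering risk is worst
        return sorted(
            ((dept, clicks[dept] / totals[dept]) for dept in totals),
            key=lambda item: item[1],
            reverse=True,
        )

    if __name__ == "__main__":
        for dept, rate in click_rates_by_department("simulation_results.csv"):
            print(f"{dept}: {rate:.0%} clicked")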

Many companies find that their most vulnerable employees are often those actually working in security or IT roles. These people tend to be highly targeted for their relatively elevated network permissions. 

3. Build a security reporting culture

Building on the previous point, we have to reiterate that safe organizations are open ones: when something looks dangerous, no one thinks twice about raising a red flag.

However, the ground reality is that in many organizations, employees either do not know how to communicate potential security risks or don’t feel like they should.

We saw some pretty explicit examples of this in a recent report by KnowBe4:

  • 38% of employees still hesitate to report security concerns because they don’t know how.
  • 31% of employees still hesitate to report security concerns because they find it too difficult.
  • 20% of employees still hesitate to report security concerns because they didn’t want to bother the security team.
  • 1 in 10 employees still hesitate to report security concerns due to fear or uncertainty.

Prompt reporting of social engineering attempts matters because it minimizes the blast radius and risk of attacks. When someone flags a social engineering attempt, it reduces the likelihood of additional employees falling victim to the same tactics. 

Prompt reporting also enables IT teams to quickly block malicious emails or communications, update security filters, and implement temporary measures to protect against similar attacks. 

Does your organization have a process for employees to report social engineering attempts? Or securely notify IT if they feel they might have fallen victim to an attack?

4. Remove employee personal data from online exposure sources

This is what we specialize in doing, and it is the easiest and potentially highest ROI tip in this article. 

Generic social engineering campaigns can be relatively easy to spot and stop. It’s the personalized attacks that you need to worry about most. 

If employees think an email came from someone they know, they’re likely to act on it – even if, in the IT/security team’s eyes, there are multiple “red flags” that should have warned them the email (or text, call, etc.) was bogus. 

As one IT person shared, email security controls help, but social engineering scams still trick their employees. 

“One instance was someone impersonating an existing vendor to our Finance dept with a phony invoice and “Oh, by the way we changed our payment details, please send the payment via ACH to this new account.” 

Another was someone impersonating an executive to our HR department wanting to change their direct deposit info.

In both cases the from display names were slightly altered / misspelled in order to avoid the impersonation attempt tag, but the from email addresses were clearly bogus. One was a Gmail address and the other was a gibberish domain.

In both cases, we ended up losing several thousand dollars.”

It’s not hard for criminals to launch these kinds of personal attacks, either. 

Attackers can find employee information for social engineering through employee social profiles, public records, corporate websites, and data brokers and people search sites.

Data brokers and people search sites pull employees’ information from various sources into one place. 

People search sites publish people’s (and your employees’) personal information like their phone number, home address, family member names, links to personal social media profiles, etc.

B2B data brokers publish information about organizations and employees, including org charts, employee education and work histories. 

We know from leaked criminal chat transcripts that attackers use data brokers, likely to find social engineering targets and names to “name drop” within these campaigns to make them more believable. 

As one person says, “Social engineering is ultimately the same art of exploiting as hacking, you need to know your target first and how to approach it in order to succeed.” 

To reduce criminals’ ability to target employees with this kind of information, it’s critical to remove employees’ data from these sources. 

People search sites and data brokers allow people to “opt out” of their databases. However, the opt-out process varies from one broker to the next. 

Opt-outs also need to be continuous as people search sites and data brokers are known to relist people’s information when they find more of it online, even if they previously opted out. 

DeleteMe automates data broker opt-outs.

When you enroll employees in a data broker removal service like DeleteMe, our privacy experts will remove your employees’ personal information from the most common exposure sources.

Trusted by 20% of the Fortune 500 and dozens of federal and state agencies, DeleteMe proactively removes employee personal data across hundreds of websites, keeping your organization safer from personalized social engineering threats. 

You Might Never Prevent 100% of Social Engineering In the Workplace

But with the advice above, you can make successful social engineering attempts a) extremely rare and b) limited in terms of potential impact. 

Take this 1,2,3,4 approach and see your social engineering risk drop dramatically. 

  1. Don’t rely on technical controls.
  2. Make social engineering training real and regular. 
  3. Build a security reporting culture. 
  4. Remove employee personal information from the web. 

Most successful social engineering attacks use personal data. 

And even the most generic social engineering campaigns require lists of employee email addresses or phone numbers – information that can be easily acquired from data brokers and people search sites. 

Remove employees’ personal data from data exposure sources like data brokers, and you will reduce the likelihood of social engineering in your workplace. 

What Will It Take to Reduce Social Engineering Risk In 2025? https://joindeleteme.com/business/blog/what-will-it-take-to-reduce-social-engineering-risk-in-2025/ Wed, 14 May 2025 16:07:41 +0000 https://joindeleteme.com/?post_type=b2b-post&p=17456


Social engineering risk is growing so fast that nearly 1 in 2 organizations reported experiencing phishing and social engineering attacks last year. 

  • Social engineering is a manipulation technique criminals use to exploit human behavior to deceive individuals into revealing sensitive information, granting access to systems, or performing actions that compromise security.

Criminals invest in social engineering because it allows them to bypass technical security defenses and requires minimal resources compared to technical hacking, yet can produce massive results (e.g., access to corporate networks or financial accounts).   

However, social engineering campaigns still rely on criminals getting access to one core input – personal data. 

Unfortunately for anyone who is not a cybercriminal, the existence of data brokers (read more about these companies below) means that this dangerous source of social engineering data is not difficult to find. 

Now, with access to large language models and artificial intelligence tools easier than ever, targeted social engineering attacks have never been simpler or less resource-intensive to carry out at scale.

DeleteMe has helped dozens of household name companies, public sector agencies, and high-risk individuals fight back against social engineering. 

Based on our experience and understanding of the current social engineering threat landscape, here’s what we’d recommend organizations do to reduce their social engineering risk in 2025. 

Effective Social Engineering Hinges on Personal Data

In a Reddit post titled “What is the best phishing email you have seen?” one of the most popular responses was:

“David has shared a folder with you.”

As the commenter later explained, the phishing email came from an attacker who “used the manager’s name to make the click happen.”   

Reddit post about effective phishing emails

Because the email included the target’s manager’s name (someone the employee knew in a work context), it was more believable. 

Luckily for the organization, the employee reported the email for investigation. 

That may not always be the case, as proven by another commenter in the Reddit thread, who said, “I did one [phishing email] that was a fake OneDrive email. I made it look like it came from a C-level whose last name was Martin, but I spelled it Martian. Got a bunch of people with that one.”

It’s not always email – social engineering by phone is also popular and can be made more effective with personal data. 

Reddit post about effective vishing

Social engineering attacks don’t strictly require personal data (beyond the targets’ email addresses/phone numbers/etc.), but as demonstrated by the above anecdotes, information on a social engineering target seriously increases the likelihood of success. 

That’s why many criminals do extensive research on their victims.

This process of finding out as much information as possible about social engineering targets is often called open-source intelligence (OSINT): gathering intelligence from publicly available tools and sources. 

Criminals use data brokers as OSINT tools. 

We know from leaked criminal group chat logs that attackers use OSINT tools like data brokers to find social engineering targets and contacts to “name drop” within social engineering campaigns to make them look more believable. 

How Data Broker Information Fuels Social Engineering Attacks

Data brokers are companies that gather personal information about individuals from various sources, compile this information into comprehensive reports, and share or sell these reports to more or less anyone. 

There are two main types of data brokers:

  1. B2B data brokers. 
  2. People search sites. 

B2B data brokers publish people’s professional data, as well as information about organizations. For example, a person’s education and employment history, past and current roles, org charts, and more.

Data broker displaying an org chart

On the other hand, people search sites focus on people’s personal information. Things like their personal phone numbers, home addresses, family member names, links to personal social media profiles, etc. 

Data broker showing a person's family member information

Either one of these sources can give criminals a lot of information to work with when crafting social engineering campaigns. 

Combine the two, and you have a gold mine of data. 

AI Is Making Social Engineering Faster & Easier 

Artificial intelligence (AI) makes gathering information about social engineering targets even easier. 

In a report on generative AI in social engineering and phishing, researchers say that: 

“With its mastery of language and analytical abilities, Generative AI can scrutinize the digital footprints of targets. This provides insights into a target’s specific interests, affiliations, or behaviors.”

In other words, AI can quickly pull relevant information about a person from hundreds of sources. 

In a recent Harvard study on large language models’ capability to launch fully automated spear phishing campaigns, AI models were able to collect accurate and useful data on people in 88% of cases. 

AI can also help write the actual content of the emails (or texts, social media messages, etc.). 

As per the above-mentioned report on generative AI in social engineering and phishing: 

“This gathered intelligence can subsequently be used to develop the attack strategy—referred to as pretexting. Pretexting is a broad stage that encapsulates the creation of a story, scenario, or identity that an attacker uses to engage with the target. […] This enables context-aware phishing, where AI crafts malicious content that resonates with the target’s communication patterns, making the story or scenario highly believable. This might include emails that sound like they’re from colleagues, friends, or familiar institutions.” 

It’s perhaps unsurprising that AI-generated phishing emails saw a 54% click-through rate in the Harvard study – the same as emails crafted by human experts and much higher than arbitrary phishing emails (which had a 12% click-through rate). 

Removing Employee Personal Information from Online Sources Can Significantly Reduce Social Engineering Risk

The most effective step any organization that wants to reduce its social engineering risk can take is to minimize the amount of personal information available about its employees online. 

This should encompass regularly auditing the organization’s public-facing information (including on corporate social media profiles and company websites) to identify and mitigate unnecessary exposure, as well as providing training to employees on safe online behaviors. 

Data broker exposure should also be taken into account and dealt with. 

Though it’s possible to remove employees’ personal data from data brokers and people search sites manually, it’s a time-consuming process and one that needs to be repeated periodically as data brokers are known to republish information once they find more of it online, even if a person has previously “opted out.” 

A better solution is to enroll employees, starting with those most exposed to social engineering risk, into a continuous service that proactively removes their personal data across hundreds of websites. 

New Feature: Risk Scan – Identify the most at-risk employees in your organization https://joindeleteme.com/business/blog/feature-announcement-risk-scan-identify-the-most-at-risk-employees-in-your-organization/ Thu, 20 Mar 2025 17:04:50 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16977 Our new Risk Scan feature, which you can find in your Admin Dashboard sidebar navigation, allows you to quickly scan for exposed PII associated with employees of your organization who are not yet on-boarded or covered by DeleteMe.

This completely free feature gives you an understanding of how exposed employees are prior to purchasing DeleteMe coverage for them. 

This feature allows admins to:

  • Gather quantitative data to help prove exposure and risk across their employee base
  • Advocate effectively for increased coverage with critical stakeholders and leadership

The results of each scan give administrators visibility into the total exposed PII per employee and their relative risk levels, and proactively suggest recommendations for appropriate DeleteMe coverage. Admins are also given a simple process to request and add additional seats/coverage based on these results.

A screenshot of Risk Scan results example

How to use the new Risk Scan feature

Administrators can upload data for up to 100 employees at a time, either by CSV or by manually adding details for each. The data required to scan for PII exposure is the same as for our Instant Coverage feature (a hypothetical CSV example follows the list below):

  • First and Last Name
  • Address
  • Phone Number
  • Email Address
  • Date of Birth (optional)
  • Title (optional)
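
For illustration, a CSV upload covering these fields might look like the hypothetical example below. The exact column headers come from the upload template in your Admin Dashboard, so treat these as placeholders (all names and details are fictional).

    First Name,Last Name,Address,Phone Number,Email Address,Date of Birth,Title
    Jane,Doe,"123 Main St, Springfield, IL 62701",555-0142,jane.doe@example.com,,VP of Finance
    John,Smith,"456 Oak Ave, Austin, TX 78701",555-0178,john.smith@example.com,1980-04-12,IT Manager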

You can easily manage and view historical scans in the Privacy Center. Admins can rename scan reports for easier reference, review or delete older scans they no longer need, and notify their Customer Success Manager if they’d like to purchase additional coverage.

FAQs:

Do employees need to be onboarded DeleteMe members to run a risk scan?

No. Administrators can run scans for any individual in their organization; those individuals are not required to be current members of the program.

Will uploading employees to Risk Scan onboard them into DeleteMe?

No. The employees you run through a Risk Scan will not automatically be onboarded into DeleteMe. Risk Scan only searches our covered websites on the public web for matches from the data you provide and gives you visibility into the total amount of exposed PII.

How long does a scan take?

Scan length depends on how many employees you’re scanning for at once. Scans can be as short as 5 minutes and as long as a few hours.

Does DeleteMe keep the personal information of employees we scan?

No. Administrators can select how long results and the uploaded personal data are stored for reference. We offer options from as short as one week to up to one year.

How to Build Cyber Security for Executive Protection Against Personal Threats https://joindeleteme.com/business/blog/how-to-build-cyber-security-for-executive-protection-against-personal-threats/ Wed, 12 Mar 2025 12:59:05 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16997


TL;DR: Security teams can build cost-effective cybersecurity programs around their executives’ personal accounts and privileges by removing their data from core exposure sources like data brokers.

  • DeleteMe delivers cybersecurity for executive protection by keeping personal data away from cybercriminals.

Even seemingly innocent information can be exploited by cybercriminals. Protect executives’ cybersecurity by keeping that information private.

Cybersecurity for Executive Protection Starts with Technical Controls

A program to protect executives from cyber threats can include security controls like:

  • Advanced threat detection controls: Software like antivirus (AV) and endpoint detection and response (EDR) that looks for malware in real time on an executive’s device or in the software they use. 
  • Device and application access management: Access control here means making sure that only executives can access their accounts, devices, and applications. It is a fundamental part of cybersecurity for executive protection. Even though executives might never want their workflows interrupted, controls like multi-factor authentication, biometric security, and zero-trust access protocols can significantly reduce their cyber risks. 
  • Incident response & continuous improvement: Data breaches and other executive cybersecurity incidents can never be prevented 100% of the time. However, they can be contained with a well-practiced incident response plan that addresses breaches as they happen and learns from them to improve defenses. When an incident occurs, everyone should know who does what, when, and what happens next to stop the incident from spiraling. 

These are some of the processes and technologies on which cybersecurity for executive protection depends.

The best executive cybersecurity programs are built on having layers of preventive and corrective controls to make incidents less likely and, if they do happen, less dangerous to business continuity and executive reputation. 

But Technical Controls Alone Will Not Protect Executives from Cyber Risk 

Many organizations deploy technical preventive controls but fail to secure executives’ personal information, even though exposed personal data enables the majority of data breaches and phishing incidents. 

When executives’ personal information is exposed online, it can put them at risk of cyber threats like:

  • Identity theft & financial fraud: With enough personal details, attackers can impersonate executives to commit fraud, including making unauthorized bank transactions or opening fraudulent accounts.
  • Social engineering: Detailed personal data enables more convincing spear-phishing or whaling attacks, where emails or texts are tailored to trick executives into sharing sensitive information or installing malware on their devices.
  • Account takeover: Knowledge of personal details (e.g., birthdays, names of family members) can help attackers guess passwords, answer security questions, or bypass multi-factor authentication (MFA) through techniques like MFA fatigue or SIM swap attacks. For example, the cybercriminal group LAPSUS$ is known to contact help desk personnel at targeted organizations with the goal of convincing them to reset the credentials of privileged accounts. They often do so by providing answers to recovery questions like a mother’s maiden name or the first street they lived on.
  • Corporate espionage: Competitors or adversaries can use publicly available executive data to get executives to divulge corporate secrets or lure them into compromising situations.

Stopping Executive Information from Being Exposed Online

It’s not hard for cybercriminals to find executive personal information online. 

Executive information is often listed on employer websites and corporate social media pages. 

Executive management

Additionally, criminals can find executive data through public records, crowdfunding platforms (which often show up in searches for an executive’s name and reveal which causes an executive cares about), online forums, personal social media pages, and public gift wish lists.

There are also data brokers and people search sites, which make executive data collection particularly easy. 

What are data brokers?

Data brokers (sometimes known as people search sites) are companies that collect, aggregate, and sell personal information about individuals, including executives and their family members. 

Anyone can use data brokers and people search sites to find information about executives, including their home addresses, phone numbers, email addresses, marital status, education and employment history, links to social media profiles, and more. 

B2B data broker profile

Besides “normal” people search sites and data brokers, there are also B2B data brokers, whose profiles can include org charts, affiliations and memberships, and employee quotes from press releases. 

How data brokers amplify cyber threats to executives

Data brokers aid cybercriminals targeting executives by:

  • Aggregating data: Instead of piecing together disparate bits of information from public records and social media, cybercriminals can access a comprehensive dossier from one source. Data broker information is often accessible through simple searches on Google and other search engines.
  • Enhancing social engineering: With a complete picture of an executive’s personal life, attackers can create highly personalized and convincing scams.
  • Escalating attack scale: Automation and AI make it easy for threat actors to exploit these dossiers on a large scale. 

Case study: How criminals exploit data brokers to target executives with cybercrime

Cybercriminal groups like Conti have dedicated open-source intelligence (OSINT) teams that harvest publicly available data on victim organizations. 

These teams collect information from a target’s official website and other online sources. In internal Conti communications, there are references to data broker databases that provide valuable details, including names and contact information of high-profile individuals. 

Cybercriminals leverage this information to identify targets for spear-phishing campaigns and to improve the credibility of their social engineering attacks by “name-dropping” verified contacts. 

Deploying an Executive Personal Data Protection Solution 

To reduce the risk of cyber attacks targeting executives, remove their information from public sources. 

Removing executive data from the web involves auditing internet sources like company sites, social media pages, and data brokers for personal executive information and, where possible, removing it. 

Because data brokers are known to republish information when they collect more of it, opt-outs from these companies need to happen continuously. 

Specialized data broker removal services can be used to keep your organization’s executives’ personal information out of reach of criminals.

Creating an Executive Cyber Security Training Program That Works Against Personalized Threats https://joindeleteme.com/business/blog/creating-an-executive-cyber-security-training-program-that-works-against-personalized-threats/ Wed, 12 Mar 2025 12:54:22 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16989


TL;DR: Executive cyber security training helps defend executives against some cyber risks. However, training must be augmented with data removal to protect executives from personalized threats. 

  • Personalized threats are threats that target executives specifically by using detailed information about them. 

Personalized threats defeat executive cyber security training because they use personal data (such as professional roles, contact details, and even family member information) to create highly convincing and contextually relevant attacks. 

  • DeleteMe supplements executive cyber security training by keeping personal data away from cybercriminals.

AI advancements make it easier for criminals to build sophisticated attack chains. 

Financial Times' headline: "AI-generated phishing scams target corporate executives."

Companies that want to protect their executives against AI-powered attacks will increasingly depend on combining cybersecurity training with personal data protection solutions. 

Combining training with personal data protection covers two bases, making it less likely an executive will engage with a personalized cyber threat and making it harder for someone to create that threat in the first place.

Executive Cyber Security Training Has Never Been More Crucial 

Executives are attackers’ prime targets. 

According to one recent study:

  • 72% of US senior executives were targeted at least once by a cyberattack in the last 18 months.
  • 69% of respondents whose company’s senior executives were previously targeted say cyberattacks against senior staff have increased. 

Criminals target executives because:

  • Executives often have direct access (as administrators) to, or control over people with access to, their organization’s most valuable assets: bank accounts, customer data, the ability to approve large transactions, etc. 
  • They tend to be “public personas,” i.e., their personal information is usually available on social media, company websites, and professional networks. Attackers can use this data in their attacks, for example, to create personalized phishing attacks that are hard to detect. 
  • People with packed schedules (like executives) are more likely to make quick decisions or overlook subtle red flags (e.g., in social engineering attacks). It might even make them skip cyber training. In the study mentioned above, among the companies who said they don’t prioritize extra cyber training for senior executives, 34% said this was due to resistance from senior leadership to participate in training due to time constraints. 

How to Train Executives to Withstand Personalized Threats

How do organizations protect their executives against modern threats? We see secure organizations build training programs around three core content and process inputs.  

1. Put a spotlight on social engineering 

Attackers often target people through the same hooks that sales might use, e.g., needs, desires, wants, and (most of all) points of contact they already know. 

Social engineering attacks, such as phishing, spear phishing, and pretexting, all rely on hooking a target executive with a point of relevance, like a hobby they have, the school they went to, their job responsibilities, etc., that will get them to put their guard down and engage with what is really a scam.

LinkedIn post about social engineering

A real-world example of a personalized spear phishing campaign targeting executives is the attack on the French cinema group Pathé. 

Here, cybercriminals sent the director of Pathé Nederland an email that looked like it came from the chief executive of the French parent company, falsely claiming a need for urgent funds related to a business acquisition. 

Despite a few inconsistencies, the email successfully manipulated internal communications, leading to multiple fraudulent transfers before the deception was uncovered.

The above campaign, like many executive social engineering attacks, is only possible when an attacker can get an executive’s email address alongside their job description and place in the company hierarchy.

2. Have a game plan for personalized threat campaigns

Executive social engineering campaigns, like the one in the previous point, rely on attackers being able to find executives’ personal information, like their email addresses, roles, and communication styles. 

The result is highly personalized messages that increase the likelihood of the executive engaging with the phishing attempt. 

All the security awareness training in the world won’t work against an attacker who really “knows” their target. Just like a great cold sales pitch, a personalized phishing attack will break down someone’s guard just enough to get them to engage and move deeper into the scam. 

What’s worse is that security awareness training often focuses on generic tactics and common phishing scenarios. Generic training is seen by executives as something they must do to “tick a box” alongside 100 other priorities.  

Personalized social engineering attacks (designed with detailed knowledge of the target’s personal and professional life) are not something this kind of training prepares executives to recognize. 

3. Work backward from the methods criminals use to target executives

Criminals can use a variety of methods to gather executives’ personal information for personalized attacks. 

Common techniques include:

  • Open Source Intelligence (OSINT)
    Publicly available information from company websites, press releases, annual reports, and professional profiles (e.g., LinkedIn) can reveal executive details and email formats.
  • Social media
    Social platforms such as Twitter, Facebook, and LinkedIn often provide clues about an executive’s contact information and professional networks.
  • Domain and email reconnaissance
    Tools that analyze domain registrations or company email structures (e.g., common email patterns like firstname.lastname@company.com) allow criminals to figure out potential email addresses.
  • Data breaches
    When companies or third parties experience breaches, leaked databases may include executive contact details that criminals can use for targeted attacks.
  • Data brokers
    People search sites and B2B data brokers share executives’ personal information (including their email address, phone number, family member details, education history, etc.) in one place. 
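
To make the email reconnaissance point above concrete: corporate addresses usually follow a handful of predictable formats, which is also why defenders audit how guessable their own executives’ addresses are. Here is a minimal illustrative sketch; the name, domain, and pattern list are made up and far from exhaustive.

    # Illustrative only: generate common corporate email patterns for a name
    # and domain, e.g., to audit how guessable an executive's address is.
    def candidate_emails(first, last, domain):
        first, last = first.lower(), last.lower()
        patterns = [
            f"{first}.{last}",   # jane.doe
            f"{first}{last}",    # janedoe
            f"{first[0]}{last}", # jdoe
            f"{first}_{last}",   # jane_doe
            first,               # jane
        ]
        return [f"{p}@{domain}" for p in patterns]

    print(candidate_emails("Jane", "Doe", "example.com"))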

When it comes to data brokers, we know from leaked criminal chat transcripts that attackers buy executive data from them.

It’s easy to see why. Data brokers are fantastic tools for determining spear-phishing targets and finding contacts to “name drop” in social engineering attacks. 

Here’s an example of an executive’s profile on a B2B data broker website:

B2B data broker profile

Many B2B data brokers also include org charts. 

Criminals leverage org charts to further personalize their attacks by: 

  • Identifying key decision-makers
    By knowing who holds power within the organization, attackers can target individuals with the authority to make critical decisions or access sensitive data.
  • Tailoring the message
    With details on roles and relationships, criminals can craft messages that appear to come from a trusted colleague, superior, or even a subordinate. 
  • Simulating internal communication
    Knowledge of departmental structures could allow attackers to mimic the language, tone, and content of legitimate internal communications, making their phishing emails or other social engineering attempts more convincing.
B2B data broker profile org chart

Attackers also love to use people search sites (“regular” data brokers). 

These sites include personal details like potential family members, marital status, and social media links – data that criminals can further make use of in their campaigns. 

AI Has Made It Easier to Launch Cyber Attacks Against Executives 

AI tools are making it faster and easier for criminals to gather information about executives by automating high-risk spear phishing operations, including data collection. 

A recent Harvard study found that AI successfully gathered precise and valuable information in 88% of cases. AI can also analyze large amounts of data on an executive’s tone and style, making it easier to create persuasive scams.

As per the Harvard study:

“AI enables attackers to target more individuals at lower cost and increase profitability by up to 50 times for larger audiences.”

Secure Companies Build Training Programs with These 2 Inputs 

In the next five years, companies that combine personalized executive cybersecurity training with active data removal solutions will stop far more attacks against executives than their peers. 

1. Up-to-date cyber security training content

Training programs should address the risks of personalized social engineering and spear phishing and include real-world examples and simulations demonstrating how attackers could leverage executives’ personal data.

Ensure training materials are frequently updated to reflect evolving tactics (particularly important with advancements in AI and data mining techniques).

2. Executive data footprint management 

Educate executives on the importance of reducing how much personal and professional information they share online. 

Advise executives on the benefits of regularly reviewing and cleaning up their online profiles. This includes removing unnecessary personal details that attackers could exploit.

Work with cybersecurity professionals like DeleteMe to identify and remove personal and professional information from data broker sites. This reduces the chance of attackers accessing personal details about executives.

Boost Your Executive Cyber Security Training Program with Personalized Data Removal

By combining specialized training with proactive steps to reduce digital exposure, organizations can better protect their executives from sophisticated, personalized cyber threats.

Implementing Digital Executive Protection Programs https://joindeleteme.com/business/blog/implementing-digital-executive-protection-programs/ Wed, 12 Mar 2025 12:46:19 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16986


Digital executive protection is how security-minded companies protect high-profile employees by making it harder for criminals to find their personal information online. 

  • According to DeleteMe’s research, executives have 30% more personal information exposed online compared to the average employee or consumer. 

Executives’ online presence can significantly influence their vulnerability to risks ranging from social engineering to harassment. 

  • DeleteMe delivers digital executive protection by keeping valuable personal information away from criminals.

Even seemingly innocent executive personal information can be used against them and their companies by criminals. That’s why digital executive protection hinges on being able to keep personal information private.

What Should Be Included In a Digital Executive Protection Program? 

Digital executive protection is how security and HR teams protect executives, their online assets, and personal data from online and offline threats. 

In practice, digital executive protection can mean using various different tools and tactics, including:

  • Risk assessment and profiling. Identifying weaknesses in the executive’s digital footprint, including social media, email accounts, and other online presence. 
  • Cyber hygiene and training. Offering education on cybersecurity best practices, such as strong password creation, recognizing phishing attempts, and secure communication methods. Regular security awareness training helps keep the executive team updated on emerging digital threats and best practices.
  • Secure communication. Ensuring encrypted communication channels for sensitive discussions and transactions. 
  • Device security. Installing and maintaining security software on all executive devices, including antivirus programs, firewalls, and VPNs. 
  • Secure networks. Setting up secure Wi-Fi networks and using VPNs to protect data transmitted over the internet. 
  • Continuous monitoring. Using tools to monitor the executive’s digital footprint in real-time for any signs of compromise or emerging threats. 
  • Incident response. Having a plan in place to quickly respond to and mitigate any security incidents impacting the executive and the organization they work for, including notifying relevant parties, containing the breach, and initiating recovery procedures. 
  • Data minimization. Reducing the amount of personal information available online about executives by removing unnecessary data from sources like corporate websites and social media and ensuring privacy settings are properly configured.
  • Reputation management. Monitoring and managing online mentions of executives and potentially harmful content to protect their reputation. 
  • Access control. Ensuring that physical access to executive devices and networks is restricted and monitored.

However, in our experience providing digital executive protection for over a decade, the single most effective step organizations can take to protect their executives is to reduce their digital information footprint. 

An executive’s digital information footprint is all the information about them that is findable online. Information like an executive’s personal phone number, email address, home address, social media accounts, family details, hobbies, and more all make up their digital footprint.

An executive’s digital footprint increases their risk of being successfully targeted by threats.

What Threats Do Digital Executive Protection Programs Help Prevent?

Attackers exploit executives’ digital footprints through:   

  • Phishing: Using an executive’s email address or phone number. 
  • Spear phishing: Turning various personal details about an executive, such as their job title, recent projects, and professional relationships, into highly targeted messages that seem legitimate.
  • Harassment and doxxing: Targeting someone either through their contact details (e.g., their home phone number) or using personal information, like family member names or their employment history, to intimidate or embarrass them.
  • Swatting: Swatting involves sending emergency services to an address under false pretenses (which can have serious safety implications). If a criminal knows an executive’s home address, they can potentially swat them.
  • Credential stuffing: Figuring out passwords using an executive’s personal details like their date of birth, family member information, etc. 

Tracing an executive’s digital footprint online also gives attackers a way to socially engineer executives. Information such as business affiliations and social interactions can all be used to manipulate or deceive the executive, their colleagues, and family members. 

Digital executive protection depends on reducing executives’ digital footprints by cutting back the personal information available about them online. 

How Data Brokers Drive Executive Risk

Corporate websites, social media profiles, public records, press releases… These are all open-source intelligence (OSINT) sources that attackers can use to find executive personal data. 

Executive management team

Data brokers are companies that find and collect this information to create profiles about executives using public records, online activity, and commercial data. 

The executive profiles available on data brokers include names, addresses, phone numbers, birth dates, family connections, property ownership, income estimates, social media handles, and more. 

A whole sub-industry of data brokers, “B2B data brokers”, exists specifically to list executive profiles, including details like education and employment history, professional connections, org charts, and more. 

B2B data broker

The danger of leaving executives’ data unchecked on data brokers is the ease with which attackers can obtain their personal and professional data. 

Anyone, including cybercriminals, activists, and stalkers, can purchase an executive’s personal information on a data broker or use people search sites to find their home address, private phone numbers, email accounts, and names of relatives. 

An attacker looking to dox or scam an executive will often start by pulling their target’s information from data broker websites. 

Digital executive protection means opting executives out of these data brokers. It removes a readily accessible source of personal data that attackers would otherwise exploit. 

Digital Footprint Reduction for Executive Protection

Tell data brokers to stop listing and selling your executives’ personal information, and you make it harder for attackers to find and target them.

Since we began removing executive information from data brokers in 2010, we’ve refined a tested method of executive personal information removal designed to tackle the core exposure risks first.

The first place to start is with a subcategory of data brokers called “people search sites.”

People search sites are some of the most dangerous sources of executive personal information exposure. They tend to expose executives’ personal information, including contact details and family member details, to anyone with an internet connection. 

Every people search site (just like other data brokers) has its own opt-out process, but they usually follow a similar pattern. To opt out, the person who wants their information removed typically needs to:

  1. Find their listing: Search the data broker’s site for your name (and location) to see if it has a profile on you. If the broker in question doesn’t have a public search feature, try searching for your name on a search engine like Google, along with the broker’s name. 
  2. Find the opt-out page: Most data brokers have a privacy opt-out page. Look for a “Do Not Sell My Personal Information” or “Privacy” link in the footer of their homepage. Alternatively, you can search for “[Broker Name] opt-out.”
  3. Submit an opt-out request: Follow the opt-out instructions on the broker’s opt-out page to request the deletion of your personal data. You might need to confirm your request via email. Some data brokers may also request additional verification, like text verification or a scan of your ID.

Anyone who wants to take their information down from data brokers needs to repeat the above process for all people search sites and data brokers that have a profile on them.  

It’s important to note that even after successful opt-outs, executives’ data may be republished by these sites when they update their databases. Executive data is valuable for data brokers; they can collate and sell it for a premium to a host of customers, including advertising agencies, lead generation firms, sales companies, and more.

That’s why ongoing monitoring is key. Executives must check back periodically to ensure their name hasn’t been re-added. If their profile has returned, they need to submit a new opt-out request.
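
Because relisting is common, some security teams keep a simple record of each opt-out and when it was last verified, so that overdue brokers get re-checked on a schedule. Here is a minimal sketch; the broker names and the quarterly re-check interval are illustrative assumptions.

    # A minimal sketch: track opt-outs and flag brokers due for a re-check,
    # since brokers may relist personal information after a successful removal.
    from datetime import date, timedelta

    RECHECK_INTERVAL = timedelta(days=90)  # assumption: re-verify quarterly

    opt_outs = [
        {"broker": "ExamplePeopleSearch", "last_verified": date(2025, 1, 15)},
        {"broker": "SampleDataBroker", "last_verified": date(2025, 4, 2)},
    ]

    def due_for_recheck(records, today=None):
        today = today or date.today()
        return [r["broker"] for r in records
                if today - r["last_verified"] >= RECHECK_INTERVAL]

    print("Re-check these brokers:", due_for_recheck(opt_outs))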

Digital Executive Protection with Automated Data Broker Opt-Outs

Personal data removal services like DeleteMe automate the manual work of opting out. 

DeleteMe continually checks data broker sites and submits opt-out requests on executives’ behalf, ensuring that their personal information is not easily accessible to just anyone.

The 2025 State of Judiciary Personal Data Exposure https://joindeleteme.com/business/blog/the-2025-state-of-judiciary-personal-data-exposure/ Thu, 06 Mar 2025 13:53:24 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16950 Judges and court officials across the United States have faced a sharp rise in threats and harassment in recent years, often fueled by the easy availability of their personal information on people-search and data broker websites. Home addresses, phone numbers, and other personal data are frequently exposed online for a few dollars or even for free, making it easier for disgruntled litigants, extremists, or stalkers to locate and target judges. Over the past five years, this problem has escalated: the number of threatening or harassing communications directed at federal judges and court personnel has surged to unprecedented levels.1 This report examines the trends in judges’ data exposure and related threats since roughly 2019, highlights notable harassment incidents across federal, state, and local levels, and reviews the legal and policy responses aimed at protecting judges’ personal information.

Growing Threats and Data Exposure (Trend Overview)

Escalating Threats: Threats and harassment against judges have increased dramatically over the past five years. According to U.S. Marshals Service data, “serious threats” (those triggering investigations) against federal judges more than doubled from 224 in fiscal year 2021 to 457 in 2023.2 This marks a steep rise from 2019, when 179 such threats were recorded.3 In total, federal authorities documented nearly 27,000 threatening or harassing communications targeting court officials from 2015 through 2022 – an unprecedented volume in modern history. Similar patterns are reported at the state level. For example, Wisconsin’s court system recorded 142 threats against state judges in a single year, prompting lawmakers to consider new protections. Arizona’s Maricopa County (home to Phoenix) logged over 400 threats or harassing incidents aimed at judges, staff, and courts from 2020 to 2023, after noticing a spike around the 2020 election. These figures underscore that judges at all levels – federal, state, and local – are facing a more hostile environment than in the past.

Role of Data Brokers: A key factor in this trend is the widespread exposure of judges’ personal data on the public web, especially via data broker and people-search sites. Judges’ home addresses, phone numbers, and family details often appear in online databases that anyone can access for a nominal fee.4 This fuels “doxing” – the malicious publication of personal info – which has become disturbingly common. Privacy experts warn that online threats and harassing messages against judges have more than doubled in the past few years, driven in part by how easily assailants can obtain judges’ private information online.5 In many cases, those with violent intent have simply bought judges’ home address or phone number from a data broker and then used that information to menace or harm them. The trend over time is clear: as more personal data has become available on the internet, incidents of doxing, threats, and harassment targeting the judiciary have grown in frequency and severity.

Findings from DeleteMe Scan Data

These trends are consistent with findings from DeleteMe’s own data comparing the average number of exposed PII instances detected per employee. After new members onboard and are scanned for the first time, the judicial sector shows more than 550 exposed data points per employee, nearly 45% more exposure than the average across all industries (Figure 1).

This fact, combined with both increasing rates of personal data exposure over the years (Figure 2) and the growing variety of personal data correlated back to a single profile (Figure 3), is evidence of why threat actors increasingly use this easily accessible, low-hanging fruit in targeted attacks. 

Notable Incidents of Doxing, Threats, and Harassment

Over the last five years, numerous incidents in the news have highlighted the dangers posed when judges’ personal information is exposed online. These cases span federal, state, and local jurisdictions:

  • Attack on Judge Esther Salas (2020): In a tragic high-profile case, a disgruntled former litigant targeted U.S. District Judge Esther Salas at her New Jersey home. The attacker, a self-described “anti-feminist” lawyer, found Judge Salas’s home address on the internet (purchasing it from a people-search site) and showed up at her door posing as a delivery driver.6 He opened fire, killing Judge Salas’s 20-year-old son and wounding her husband. This attack – facilitated by the assailant’s ability to obtain the judge’s personal data online – was a wake-up call for the country. It vividly illustrated how exposed personal information can lead to real-world violence.
  • Doxing of U.S. Supreme Court Justices (2022): Even the nation’s highest judges have been targeted. After the Supreme Court’s decision overturning Roe v. Wade in 2022, hacktivists “doxxed” five Supreme Court justices by posting a trove of their personal details online.7 The leaked information included the justices’ home addresses (and even some financial details), an apparent attempt to invite harassment or intimidation. This privacy breach against the Justices – reportedly in retaliation for the controversial ruling – underscored that no judge is immune from doxing when personal data is readily available.
  • Harassment of Judges in Political Cases (2020–2023): Judges presiding over politically charged cases have faced unprecedented waves of threats, often abetted by online exposure of their data. For instance, U.S. District Judge Royce Lamberth in Washington, D.C., who handled high-profile January 6th Capitol riot cases, endured a barrage of death threats. One man obtained Judge Lamberth’s home phone number (likely via online lookup) and repeatedly called his residence with graphic threats to murder him.1 Right-wing websites and forums also circulated judges’ personal details alongside calls for violence in retaliation for rulings perceived as unfavorable. Judges and prosecutors involved in cases related to former President Trump have been especially targeted – their home addresses and contact info have surfaced on extremist message boards, followed by spikes in threats and hostile messages. In one example, a state court judge in New York handling a Trump-related trial had his address and family details posted on social media, with users urging protests at his home. These coordinated harassment campaigns are often enabled by data broker sites that aggregate and publish personal information.
  • State and Local Judges Targeted: Doxing and violence have also struck judges at the state and local levels. In October 2023, Maryland Circuit Court Judge Andrew Wilkinson was shot and killed outside his home by a man upset over a custody ruling. Investigators revealed the suspect found the judge’s home address online months earlier – the judge’s wife stated that “the internet led him to Drew [Judge Wilkinson]”.12 Similarly, in June 2022, a retired county judge in Wisconsin, John Roemer, was ambushed and killed in his home by a gunman carrying a “hit list” of public officials – an attack that authorities believe was premeditated and facilitated by the judge’s publicly available address. These tragedies, while extreme, highlight the real danger when personal data (like home addresses) of judges is accessible to would-be attackers. Even when physical harm is not carried out, the threat of violence is often very real: judges across the country report that online abuse and doxing incidents have made them fear for their families’ safety, and some have taken to carrying firearms for protection.1 In short, public exposure of judges’ personal data has directly contributed to intimidation, threats, and even lethal attacks in multiple recent cases.

Legal and Policy Responses to Protect Judges’ Personal Data

In response to these mounting threats, lawmakers and judicial authorities have pursued a range of legal and policy measures to protect judges’ personal information and improve their security. Below is a summary of key actions and proposals (with links to relevant legislation and policies):

  • Federal Judicial Privacy Law (Daniel Anderl Act, 2022): In December 2022, Congress enacted the Daniel Anderl Judicial Security and Privacy Act (named after Judge Salas’s late son). This law prohibits data brokers from selling federal judges’ personally identifiable information and allows judges to have personal info removed or redacted from public databases. Specifically, it protects judges’ personal data from resale by data brokers and permits federal judges to redact home addresses and contact info displayed on government websites. It also bars other businesses or individuals from publishing judges’ personal details if there’s no legitimate public interest. The law had broad bipartisan support and was a direct response to the Salas tragedy.
  • New Jersey’s “Daniel’s Law” (2020): At the state level, New Jersey moved quickly after the Salas attack to pass Daniel’s Law, which shields the personal information of judges, prosecutors, and law enforcement officers. This state law forbids the disclosure of the home addresses of active or retired judges (and immediate family) in public records, and it allows those officials to request the removal of their home address from any internet site or database.8 Daniel’s Law was one of the first and strongest state responses, setting a model for others by exempting judges’ home info from public records laws.
  • New State Legislation Protecting Judges’ Data: In the past few years, many states have introduced or enacted laws to curb public exposure of judges’ personal data. As of 2024, at least 10 states have passed new judicial privacy measures, and over 60 bills have been considered across 21 states.9 For example, Missouri’s Judicial Privacy Act (2022) covers not only state judges but also federal judges and prosecutors, allowing them to keep personal information (like home addresses) confidential. Florida expanded its existing judge-address confidentiality statute to include court staff and clerks’ employees as well. In Georgia, lawmakers unanimously approved a bill enabling any public employee (including judges at federal, state, or local levels) to request that their home address and phone number be removed from county property records posted online.8 Likewise, states such as Delaware, Nebraska, Idaho, and others have updated laws to prevent government agencies from releasing judges’ personal info in public documents or databases. These measures aim to close the data exposure loopholes at the state and local record level.
  • Penalizing or Preventing Doxing of Judges: Some jurisdictions are adding teeth to anti-doxing protections. In Maryland, after Judge Wilkinson’s murder, a bill was proposed to punish the malicious posting of a judge’s personal information when it is intended to facilitate harm.10 Similarly, a bill in Minnesota to prohibit “doxing” of judges cleared a legislative panel in 2023, reflecting growing interest in making the online targeting of judges a specific offense.11 These efforts seek to deter would-be harassers by making it illegal to knowingly expose judges’ home addresses or other private details in a threatening context.
  • Judiciary Security Programs: The federal judiciary and law enforcement have also taken internal steps to protect judges. The U.S. Marshals Service's Judicial Security Division, which guards federal judges, has expanded monitoring and threat response capabilities in light of rising incidents.3 Judges are now offered enhanced home security systems and training. Notably, the Administrative Office of U.S. Courts provides federal judges with an online privacy service (such as subscriptions to "DeleteMe") to proactively remove judges' personal information from data broker websites. This service continuously scans and deletes judges' addresses and phone numbers from people-search sites. Such measures have become a standard "perk" of the job in recent years, reflecting the new reality that scrubbing personal data from the internet is essential for judges' safety. At the state level, court offices are increasingly advising judges on how to opt out of online databases, and some sheriff's departments now conduct extra patrols or "swatting" prevention for judges who have been doxed.1
  • Federal Regulatory Action on Data Brokers: Beyond judicial-specific laws, broader privacy reforms are being considered. In late 2024, the Consumer Financial Protection Bureau (CFPB) proposed a new rule to regulate data brokers and curb the sale of sensitive personal data that could endanger individuals like judges and law enforcement. The CFPB cited the risk of violence and stalking posed by data broker practices, explicitly referencing the 2020 incident where a judge’s son was murdered by an assailant who purchased the judge’s address from a data broker. The proposed rule would bring data brokers under stricter federal oversight (treating them like consumer reporting agencies) and ban the sale of personal identifiers – such as addresses and phone numbers – without strict safeguards.5 This regulatory push, alongside an FTC warning to data sellers after the Roe v. Wade ruling, shows a growing federal resolve to clamp down on the data broker industry in the interest of public safety.

Each of these responses – from new laws and regulations to court-led security programs – seeks to reduce the exposure of judges’ personal data and thus diminish the opportunities for criminals or harassers to target them. In summary, the past five years have seen a concerted effort to address the nexus between data exposure and threats to the judiciary, through enhanced privacy protections, anti-doxing measures, and improved security for judges at all levels. Legislators and court officials alike recognize that protecting judges’ personal information is vital to safeguarding the justice system and ensuring judges can do their jobs without fear of violent reprisal.

References:

  1. Reuters: Judges in Trump-related cases face unprecedented wave of threats
  2. Reuters: Exclusive: Threats to US federal judges double since 2021, driven by politics
  3. The Florida Bar: Federal judges grapple with escalating threats
  4. Lawfare Media: Data Brokers and Threats to Government Employees
  5. CFPB: CFPB Proposes Rule to Stop Data Brokers from Selling Sensitive Personal Data to Scammers, Stalkers, and Spies
  6. USCourts.gov: Congress Passes the Daniel Anderl Judicial Security and Privacy Act
  7. The Register: Supremes ‘doxxed’ after overturning Roe v Wade
  8. WABE.org: Georgia, other states shield addresses of judges, workers after threats
  9. Judicature: States Move to Protect Judges’ Safety
  10. WMAR2 News: Judge's murder prompts bill to conceal judges' personal information
  11. Session Daily: Anti-doxing bill to protect judges clears judiciary panel
  12. CBS News: Judge shot, killed in Wisconsin home by gunman with alleged hit list
]]>
How AI Will Affect Privacy in 2025 https://joindeleteme.com/business/blog/how-ai-will-affect-privacy-in-2025/ Thu, 27 Feb 2025 16:28:37 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16648 In our recent blog post series for Data Privacy Day, we've covered topics such as the different types of privacy services and data brokers. Now we want to share our predictions for how AI will affect the privacy and safety of your personal data in the coming year. 

AI-driven cybercrime will grow significantly

Since ChatGPT brought AI into the mainstream, AI-driven fraud has become top of mind for many companies and consumers. A recent ZDNET article showed that many experts believe the AI cybercrime wave has only just begun. 

Whereas once phishing emails were more easily identified by spelling mistakes or poor grammar, ChatGPT is now capable of generating error-free copy in seconds. As a result, cybercriminals are automating the entire phishing process and sending thousands of messages around the clock. 

Deepfakes pose a similar problem. While many experts worried about image-based deepfakes in the early days of the AI boom, a more pressing concern may be voice deepfakes. 

In late 2023, an IT company fell victim to a deepfake that cloned the voice of one of its employees. The resulting breach hit dozens of the company's cloud customers. Early last year, another deepfake scam resulted in a $25 million payout to fraudsters. 

In the absence of comprehensive federal guidelines, this crime wave will no doubt continue to grow. 

AI agents will share data on your behalf, with disastrous consequences

Agentic AI refers to tools that can act on your behalf, performing tasks such as booking flights, managing accounts, or even negotiating contracts. For example, OpenAI's new Operator tool can search the web and finalize purchases or travel arrangements. While these features promise convenience, they also introduce significant risks for both individual consumers and businesses.

For consumers, using AI tools like Operator often involves granting access to sensitive personal information, including financial credentials, travel preferences, and other private details. These tools may accept privacy policies and complete transactions autonomously on a user's behalf, which can result in inadvertent data sharing or misuse. A poorly designed or misused AI agent could expose users to financial fraud or privacy breaches, even with safeguards in place.

For businesses, the stakes are even higher. Agentic AI tools embedded in enterprise software could handle sensitive corporate data, including employee credentials, customer information, and financial records. If these tools lack sufficient oversight, they may inadvertently expose proprietary information or create vulnerabilities that cybercriminals could exploit. 

According to Gartner, by 2028, 33 percent of enterprise software will incorporate agentic AI. This widespread adoption could amplify risks in organizational cybersecurity and employee privacy, particularly if companies do not prioritize human oversight and robust policies.

The integration of agentic AI into everyday workflows means both personal privacy and enterprise security are at risk. Businesses and individuals must remain vigilant and establish clear boundaries around what these tools are allowed to access and automate.

Employees sharing data with AI tools will significantly increase business risk

Even apart from AI agents, employees are already sharing too much information with AI tools. For companies that invest in the enterprise version of ChatGPT, the situation is somewhat safer: by default, ChatGPT Enterprise does not train on business data, meaning user input is not used to improve the model's responses. However, smaller businesses whose employees use individual accounts should know that the AI may train on everything employees input unless the relevant data-sharing setting is turned off. 

If that training data becomes exposed through a hack or breach, or even through a cybercriminal simply chatting with the AI tool after stealing access credentials, companies shouldn't be surprised if their proprietary information ends up in the hands of fraudsters, scammers, and ransomware gangs. 

This year, companies will have to create organizational policies that cover what employees can share with AI tools. However, if the past is anything to go by, such policies will lag far behind the actual risks, and even large companies will not be immune to AI oversharing. A rough sketch of what technical enforcement of such a policy could look like follows below. 
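As an illustration only, here is a minimal sketch (in Python) of one way an organization might scrub obvious identifiers from text before it is allowed to reach any external AI tool. The patterns, function names, and the idea of a pre-submission filter are assumptions made for this example, not a description of any specific product; real deployments would need much more robust detection and human review.

    import re

    # Hypothetical pre-submission filter: strip obvious identifiers (emails,
    # US Social Security numbers, phone-like numbers) from text before it
    # leaves the organization. Real deployments would need far stronger
    # detection (e.g., named-entity recognition) plus a review step.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace each matched identifier with a labeled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    if __name__ == "__main__":
        prompt = ("Summarize this ticket: Jane (jane.doe@acme.example, "
                  "555-867-5309, SSN 123-45-6789) requested a refund.")
        print(redact(prompt))
        # -> Summarize this ticket: Jane ([REDACTED EMAIL], [REDACTED PHONE],
        #    SSN [REDACTED SSN]) requested a refund.

In practice, a filter like this is best treated as one layer of an acceptable-use policy rather than a guarantee: it catches careless pastes, but determined oversharing still requires training and oversight.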

AI tools from Big Tech will train on your personal data

One thing that has become very clear over the past few years is that new AI tools like Claude, Google’s Gemini, and ChatGPT are data-hungry. They scrape the internet for data, and while many providers may attempt to prevent personal information from being caught up in training data, they have not always been successful. 

Experts have exposed Gemini for training on personal data, and Elon Musk's X has instituted policies that allow it to train AI models freely on user information. Unfortunately, these types of incidents demonstrate what consumers should already know: Big Tech will not respect your personal information in the coming year, and there's a high chance these companies will sell your data to data brokers or use it for training AI tools. 

For consumers, the best option at the moment is to opt out of sharing personal information. Be aware of the privacy policies for companies like X, and if your data is freely available online, opt out or use a service like DeleteMe to protect your information from getting caught up in web scraping. 

AI regulations will face serious challenges under the current administration

Trump has already repealed Biden’s executive order addressing AI risks, and we’re back to a Wild West of data privacy when it comes to these technologies. And it seems unlikely that this year will bring a regulation that sufficiently addresses the problem. 

In fact, with the current administration’s tech and business-friendly policies often aiming to reduce regulatory oversight and red tape, the problem will very likely worsen throughout the next few years. 

Consumers must protect themselves

AI comes with a lot of risks for data privacy, but the benefits in efficiency and productivity that AI brings to businesses mean that its adoption at this point is inevitable. Unfortunately, the government and Big Tech have little protection to offer. This means it is up to businesses and consumers alone to protect themselves from AI risks this Data Privacy Day and throughout the coming year. 

There are services that let you opt out of AI tools using your data for training, or that remove your information from data brokers so it doesn't get swept up in web scraping for AI training. And as long as companies stay aware of AI risks and monitor carefully, they can prevent employees from accidentally exposing too much data. For now, this seems to be the most we can hope for when it comes to AI and data privacy.

]]>
DeleteMe’s 2025 Privacy Predictions https://joindeleteme.com/business/blog/deletemes-2025-privacy-predictions/ Tue, 28 Jan 2025 16:13:04 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16644 With Data Privacy Week in full swing, it’s the perfect time to discuss what privacy trends we expect to see in 2025. Accordingly, here are the five key privacy trends that will impact consumers and companies this year. 

Prediction #1 – AI will drastically increase business cybersecurity risk

In our last blog post, we talked about how AI will affect data privacy. You can read the post here for more information, but it's worth mentioning again. In 2025, businesses are already grappling with increasingly sophisticated attacks, and AI will tip the scales further in favor of cybercriminals.

AI enables faster, more complex cyberattacks. Malicious actors can now automate phishing attempts, making them highly personalized based on publicly available information, and also nearly indistinguishable from legitimate communications. This drastically increases the success rate of attacks, putting businesses of all sizes at greater risk.

Another concern is the misuse of generative AI deepfakes. Going forward, cybercriminals will increasingly use deepfakes to impersonate executives or employees, creating convincing fake communications to steal sensitive data or authorize fraudulent transactions.

Organizations will also face greater risks from within. As employees adopt AI tools to enhance productivity, they may unknowingly expose proprietary data. A single misstep, like entering confidential information into an unsecured AI tool, can have wide-reaching consequences. 

Prediction #2 – State-sponsored cyberattacks will be a top driver of breaches

Cyberattacks driven by foreign governments are nothing new, but in 2025, they will reach unprecedented levels. As global political tensions escalate, businesses will face increasing threats from state-sponsored hackers targeting critical infrastructure.

One factor that will exacerbate this trend is tariffs instituted under the Trump administration. These tariffs may motivate adversarial nations to retaliate through cyberspace.

Already, state-sponsored groups are shifting their focus to businesses and institutions in the United States, aiming to disrupt operations and access sensitive information. In November 2024, T-Mobile disclosed that it had been breached in a campaign allegedly carried out by Chinese state-sponsored hackers.

State-sponsored cyberattacks are often highly coordinated and well-funded, but not always. Sometimes they are standard phishing attacks that target both large corporations and smaller suppliers, which often serve as an entry point to broader supply chains.

The risks are clear: state-sponsored cyberattacks are no longer rare or limited to government agencies. Businesses must prepare for a future where breaches driven by geopolitical tensions become the norm. 

Prediction #3 – The FTC will see its powers reduced

In 2025, the Federal Trade Commission (FTC) will face increasing pressure to scale back its regulatory and enforcement powers. Under the new administration, the FTC’s ability to address antitrust issues and protect consumers from privacy violations will likely be curtailed in an effort to reduce red tape and simplify regulatory compliance for companies. This could also result in fewer investigations and weaker penalties for corporate missteps, which may sound positive to some people, but will actually leave companies at greater risk. 

Under a weakened FTC, social media platforms and other tech giants stand to gain significantly. With reduced scrutiny, they could expand their data collection and advertising practices that exploit consumer information. Meanwhile, smaller businesses may find it harder to compete without robust antitrust enforcement.

In 2025, the FTC’s reduced powers could lead to lasting consequences for privacy and consumer rights. It will be up to individuals, data privacy services, and advocacy groups to fill the gap left by waning federal oversight.

Prediction #4 – The alliance between Big Tech and the current administration will worsen consumer data privacy

Even beyond reducing the FTC’s powers, the new administration’s cozy relationship with Big Tech will erode consumer data privacy in 2025. By stripping away federal protections and regulatory oversight, the government will open the door for tech giants to prioritize profits over user safety even more than they already do.

One of the most telling moves is Trump’s repeal of President Biden’s AI executive order that addressed AI risks like discrimination and national security threats. Without federal safeguards, the rapid growth of AI will expose consumers to unchecked data collection, algorithmic bias, and security vulnerabilities.

Furthermore, a weakened regulatory environment means Big Tech will face fewer restrictions on how they collect, use, and share data. Companies will have the freedom to expand invasive practices like behavioral tracking and profiling, often without clear consent. Consumers, meanwhile, will have fewer options to push back or demand accountability.

For example, with looser oversight, we can expect an increase in hidden terms buried in user agreements, making it harder for consumers to opt out of invasive practices. At the same time, data breaches and misuse will become harder to detect, as weakened federal enforcement limits transparency.

Prediction #5 – States will continue to step up in the absence of a federal privacy law 

As federal lawmakers remain gridlocked on privacy legislation, states will continue to take the lead in 2025. 

Recent legislation demonstrates that this trend is already in motion. California's Delete Act, which gives consumers a single mechanism to request that all registered data brokers delete their data, sets a new standard for state privacy laws. Other states, including Delaware, Maryland, and Oregon, have passed or are considering similar measures aimed at strengthening consumer rights.

This patchwork of laws creates both opportunities and challenges. On one hand, state-level efforts are providing critical protections for residents. Opt-out rights, stricter data retention policies, and enhanced transparency requirements are all becoming more common. On the other hand, businesses now face an increasingly complex compliance landscape, with varying requirements depending on the state.

That said, many of these state laws are still too weak to provide adequate protection. In 2025, however, the push for stronger state privacy laws will likely gain momentum. States are proving that meaningful change is possible even in the absence of federal action. 

Final Thoughts

While many data privacy trends may seem negative at the moment, the truth is that we are constantly seeing more awareness of the importance of securing personal data on the personal, enterprise, and state levels. Even as many Big Tech companies continue to disregard privacy, smaller companies are focusing on protecting their customers and employees in higher numbers than ever before. 

Ultimately, despite challenges, we believe the United States is headed in the right direction. The next several years will continue to make this evident to the entire world.

]]>
Understanding the Data Broker Market: Different Types of Data Brokers https://joindeleteme.com/business/blog/understanding-the-data-broker-market-different-types-of-data-brokers/ Mon, 27 Jan 2025 20:32:36 +0000 https://joindeleteme.com/?post_type=b2b-post&p=16531 There’s an open secret when it comes to your personal data. Data brokers collect and sell your information—often without your knowledge or consent. They pull details from public records, online behavior, and social media, then sell it to businesses, marketers, government agencies, and even random people online. Scammers and other malicious actors can buy this information to create more targeted scams.

With consumer privacy laws now in effect in 19 U.S. states, individuals finally have the power to opt out of certain types of data collection. It’s important for consumers to understand that they may have the option to remove their information from data broker sites and to know how to exercise that option. This article explores the three main types of data brokers, why they matter to your privacy, and how to protect your personal information.

Data broker privacy concerns

Data brokers operate without direct interaction or consent from individuals, collecting and selling personal information from various sources. Unlike companies that obtain your data through privacy policies or customer agreements, data brokers gather information in the background and profit by selling it to third parties—often without your knowledge. These third parties could be businesses, financial institutions, government agencies, or even individuals using the data for purposes like targeted advertising or identity verification.

The lack of transparency in how brokers collect, store, and share data raises significant privacy concerns, which is why these new consumer privacy laws specifically address data brokers. There have also been significant data breaches affecting these kinds of brokers in recent years. In 2024, the B2B data broker National Public Data suffered a huge breach that exposed billions of records, including Social Security numbers.

Three types of data brokers

Within this market, there are different types of data brokers, each specializing in various forms of data collection, processing, or sale. These distinctions are important because they shape how the brokers operate, the legal frameworks they follow, and the potential risks they pose to individual privacy.

1 – “Big data” brokers (Equifax, Experian, TransUnion)

The first type consists of the “big data” brokers, many of which are well-known credit reporting agencies like Equifax, Experian, and TransUnion. While these data brokers do comply with regulations like the Fair Credit Reporting Act (FCRA), they also collect and handle huge amounts of personal and financial information. 

The primary function of these credit reporting agencies is to collect, process, and distribute personal data for purposes like credit checks, identity verification, and marketing. Their data comes from a variety of sources, including financial institutions, government databases, and loyalty programs, enabling them to create detailed consumer profiles. Historically, these companies provided their services to major financial institutions and government agencies, but they have since expanded into areas like marketing and analytics.

Consumers may not need to worry as much about opting out of big data brokers like these, primarily because their services are often necessary for functions such as applying for loans or obtaining credit cards. 

However, you'd be right to worry that a data breach could expose your financial data, as happened in the 2017 Equifax breach. Opting out of these brokers is difficult, as their services are integral to the modern financial system, but it is advisable to ensure your information is correct and to monitor it regularly. If you do get a notification about a breach, check directly with the agency for next steps to protect your data. In all, these brokers are relatively reputable and operate under stringent regulations to provide services essential to the economy. 

2 – B2B data brokers (National Public Data)

The second type of data broker is the business-to-business (B2B) data broker. These organizations focus on processing and selling aggregated or anonymized data to businesses rather than directly to consumers. Some examples are employment verification services or location data analytics companies. Although they handle large volumes of personal data, they argue that the data is often anonymized, which reduces regulatory scrutiny. National Public Data is one example of this type of data broker. 

However, in many cases, malicious actors can cross-reference so-called anonymized data with information freely available online to violate individual privacy. So although B2B data brokers add value to the businesses they serve, their practices may ultimately compromise consumer privacy. 

Unfortunately, these companies often operate in a gray area of data regulation. This is especially true in the U.S., where anonymization is not always well-defined or enforced, so there may not be clear opt-out mechanisms. Still, it is worth considering opt-out options when available, especially from companies involved in mobile or location tracking, as this type of data can be misused for intrusive purposes.

3 – People search data brokers (Spokeo, Intelius, BeenVerified)

Lastly, there's the category of "people search" data brokers, which is the most controversial and, historically, the least regulated. These brokers, such as Spokeo, Intelius, and BeenVerified, allow anyone to search for personal information on individuals. They often sell data directly to consumers, and, inevitably, to scammers and fraudsters as well. 

What makes these brokers particularly troubling is the lack of reporting about the sources of the data they sell, as well as the lack of accountability for the accuracy of that data. These companies may gather personal information from publicly available sources, social media profiles, public records, and sometimes even data breaches. Fortunately, recent lawsuits and scandals have shown many consumers just how far these “people search” brokers will go to violate privacy. 

What's worse, these data brokers tend to make it difficult to remove personal data, though many will now have to allow opt-outs under new state privacy laws. Unfortunately, while the more reputable services may honor opt-out requests, others operate in loosely regulated jurisdictions or states without comprehensive consumer data privacy laws, making it challenging to enforce opt-outs. Still, consumers should focus on removing their data from these sources whenever possible to avoid identity theft, doxing, or unwanted personal exposure. 

Safeguarding PII in a data-driven world

In a world where personal data is constantly being collected and sold, understanding how to protect your privacy is crucial. To safeguard your personal data, familiarize yourself with your state’s privacy laws and learn how to opt out of data broker sites, especially those that expose sensitive information on people search platforms.

Taking proactive steps, such as conducting online searches to see where your data appears and requesting its removal, can reduce your exposure. Thanks to new privacy laws, you now have more control over your data. Use this power to protect your privacy and limit the reach of data brokers.
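As a purely illustrative sketch, the Python snippet below shows how someone might automate a first pass at checking whether their name appears on a handful of people-search pages. The URL templates are placeholders rather than real endpoints, and most real people-search sites require manual searches and have their own opt-out forms, so treat this as a starting point for a manual review, not a complete solution.

    import urllib.parse
    import urllib.request

    # Placeholder search-URL templates; substitute the sites you actually
    # want to review. These example domains are hypothetical.
    SEARCH_TEMPLATES = [
        "https://people-search.example/search?q={query}",
        "https://records-lookup.example/find?name={query}",
    ]

    def appears_on_site(template: str, full_name: str) -> bool:
        """Fetch a search page and report whether the name shows up in it."""
        url = template.format(query=urllib.parse.quote_plus(full_name))
        req = urllib.request.Request(url, headers={"User-Agent": "exposure-check/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="ignore")
        except Exception:
            return False  # network errors or blocks are treated as "not found"
        return full_name.lower() in page.lower()

    if __name__ == "__main__":
        name = "Jane Q. Example"
        for template in SEARCH_TEMPLATES:
            host = urllib.parse.urlparse(template).netloc
            status = "possible listing" if appears_on_site(template, name) else "no match found"
            print(f"{host}: {status}")

Any hits still need manual verification, and removal itself typically goes through each site's own opt-out process or a data removal service.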

]]>