Everything posted by KL FC Bot

1. Doing business today without big data would be unthinkable. Market specialists gathering information for analysis and forecasts, developers producing numerous versions of programs, and business processes at times requiring storage of gigantic amounts of files are just a few broad examples of how business rests on data — and storing such volumes of information on one’s own systems tends to be cumbersome. As a result, companies are increasingly turning to public cloud platforms such as Azure Storage or Amazon S3. Somewhere during migration to the cloud, however, a common question arises: How can you scan uploads to prevent cloud storage from becoming another source of cyberthreats?

Why scan uploads at all?

Not every file uploaded to the cloud comes from a trusted computer. Some may be files from clients, for example, and you can never be sure what kind of security solution, if any, they use. Some data may be transferred in automatically (e.g., files uploaded once a day from remote devices). And ultimately, you cannot rule out the possibility of attackers gaining access to the credentials of a company employee and uploading malicious files on purpose. In other words, you cannot eliminate every trace of cyberrisk. Scanning incoming files is an obvious and critical safety process. That said, we have always advocated for multilayered approaches to security as part of a defense-in-depth strategy. In addition, incident investigations rely on knowing not only that a file contains a threat but also exactly when the threat arrived. Knowing whether the file became compromised on the client side or was replaced with malware in your cloud storage, for example, helps identify the source of the problem. Moreover, some business processes require file access for partners, contractors, or even customers.
In such cases, no one can guarantee the reliability of the security mechanisms they employ, so if an incident occurs, your cloud storage will be considered, fairly or not, the source of the threat. Hardly great from a reputational point of view.

How to stop cyberthreats from spreading through your file storage

We recommend using Kaspersky Scan Engine to scan all incoming files in any file storage. If your data is stored in Azure Storage or Amazon S3, there are two possible use scenarios.

Scenario 1: Running through Kubernetes

If you use Kubernetes, a container-orchestration system for applications, then integrating Kaspersky Scan Engine for file scanning is not difficult. We provide a solution in the form of a ready-made image. Customers need only mount the container and run it.

Scenario 2: Support through connectors

If you don’t use Kubernetes, then you’ll need native platform support. However, that situation is not much more complicated; we provide connectors for attaching Kaspersky Scan Engine to Azure Storage or Amazon S3. All of the tools you’ll need to configure and fine-tune our engine are right in the cloud control panel. You’ll find more information about Kaspersky Scan Engine on the solution's page. View the full article
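The scan-on-upload flow described above can be sketched in a few lines. This is a hypothetical illustration, not the Kaspersky Scan Engine API: `scan_file` stands in for a call to the engine, the EICAR test marker stands in for real detection logic, and the `clean/`/`quarantine/` prefixes are invented for the example. Note that the verdict records when the file was scanned, which matters for the investigation point made above about knowing exactly when a threat arrived.

```python
import datetime

# The harmless EICAR test string is the standard way to exercise a scanner.
EICAR_MARKER = b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"


def scan_file(data: bytes) -> dict:
    """Stub scanner: flags the EICAR test marker.

    A real deployment would call the scan engine (e.g., over its
    network API) here instead of doing a substring check.
    """
    return {
        "infected": EICAR_MARKER in data,
        "scanned_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


def handle_upload(key: str, data: bytes) -> str:
    """Route an uploaded object based on the scan verdict.

    Returns the destination key: infected files go to a quarantine
    prefix (where they can be examined and alerted on), clean files
    to the normal prefix. Prefix names are illustrative.
    """
    verdict = scan_file(data)
    if verdict["infected"]:
        return f"quarantine/{key}"
    return f"clean/{key}"
```

In a real Amazon S3 setup, `handle_upload` would typically run in a function triggered by an object-created event, so every upload is scanned before anything else consumes it.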
2. In this week’s jam-packed episode of the Transatlantic Cable podcast, Jeff, Ahmed, and I tackle some prickly topics. To begin, we look at how the FBI is making some serious noise about DarkSide, offering $10 million for the capture of gang members. From there we have a look at Facebook shutting down its controversial facial recognition system. After that, it’s two stories about crypto: the first a scam having to do with the Squid Game cryptocurrency and the second looking at how the mayor-elect of New York, Eric Adams, has requested his first three paychecks be payable in Bitcoin. If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below: Feds offer $10 million bounty for DarkSide info Facebook, citing societal concerns, plans to shut down facial recognition system Squid Game themed “play-to-earn” cryptocurrency explodes in value — scam warning issued New York’s next mayor wants to be paid in Bitcoin View the full article
3. Have you disabled annoying e-mail notifications from social networks? We think that’s great! We even periodically offer advice on how to cut down on digital noise. But LinkedIn is a special case. People really do expect messages from the social network for professionals — one could be from a prospective employer or business partner, after all. But a message from LinkedIn might just as easily come from a scammer pretending to represent a legitimate company. In this post, we’re taking apart some phishing e-mails masquerading as LinkedIn notifications.

“I am a bussinessman and am interested in doing business with you”

On the face of it, this type of e-mail looks like a typical partnership proposal. It includes the photo, position, and company name of the potential “partner,” and even a LinkedIn logo. The message is too short, though, and one might expect the word “businessman” to be spelled correctly in a legitimate message. You may also see that the message came from “LinkediinContact” — note the extra “i” — and the sender’s address has nothing to do with LinkedIn.

E-mail purportedly from LinkedIn proposing cooperation with an Arab businessman

The link in the e-mail leads to a website that looks similar to the real LinkedIn login page.

Phishing LinkedIn login page

But the URL is far removed from LinkedIn’s, and the domain is the Turkish .tr, not .com. If the victim enters their credentials on this site, the account will soon be in the hands of the scammers.

“Please send me a qoute”

A similar case is this message seemingly from an importer in Beijing, asking for a quote for the delivery of goods. The notification looks convincing; the message footer includes links to view help and unsubscribe from notifications, a copyright notice, and even the actual postal address of LinkedIn’s China office. Even the sender’s address looks like the real deal. Nevertheless, we see some red flags.

E-mail purportedly from LinkedIn in which a Chinese buyer requests a quote.
The sender’s address looks clean, but that doesn’t mean everything’s in order

For example, an article is missing in front of the word “message” in the subject line. The author may not speak fluent English, but the platform generates the subject of LinkedIn notifications automatically, so the subject can’t contain errors. If you smell a rat and do a search for the company (UVLEID), you won’t find it because it doesn’t exist. And most important, the links in the e-mail point to a suspicious address in which random words, numbers, and letters have been added to the name of the social network. The domain is wrong again as well. This time it’s .app, which app developers use.

The button points to a phishing site

The “LinkedIn login page,” which the link opens, has issues: a blue square covering part of the last letter in the logo, and Linkedin instead of LinkedIn (under the username and password fields).

Carefully check the URL of the site and the name of the social network

“You appeared in 2 search this week”

Links in fake notifications don’t always open fake login pages — sometimes they can lead to more unexpected places. For example, this message saying that the recipient’s profile has been viewed twice — common information for LinkedIn users to see — obviously uses bad English, but even if you miss that, a few other details should catch your attention:

Unknown sender address and link to a site in a Brazilian domain

With this kind of deception, if the victim misses the strange set of letters in the sender’s address or the Brazilian domain, they may well click the button and get to an unexpected site — in our case, a “how to become a millionaire” online survey. After a few redirects, we ended up at a form asking for contact information, including phone numbers. The scammers most likely use the collected numbers for phone fraud.
Online survey with redirect for further data harvesting

How to tell if a message from a potential partner or employer is fake

Cybercriminals use phishing to steal accounts, personal data, and money, but that is no reason to stop using LinkedIn or other services. Instead, learn how to guard against phishing, and always keep these basic tips at the ready:

Watch out for unexpected messages from well-known companies;
Look for inconsistencies in the names and addresses of senders, as well as typos in links, the subject line, and the e-mail body;
Check notifications using official apps or websites, and in the latter case, manually type in the address or open it from your bookmarks;
Enter contact information, card numbers, or login credentials only after double-checking you are on the real site;
Use a reliable security solution that warns you of danger and blocks phishing and fraudulent sites.

View the full article
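The domain checks described in the post (a .tr or .app host instead of linkedin.com, a look-alike name containing "linkedin") can be expressed as a tiny heuristic. This is an illustrative sketch, not a real anti-phishing engine: the function name and the hard-coded allowlist are assumptions for the example, and production filters use far richer signals.

```python
from urllib.parse import urlparse


def link_red_flags(url: str) -> list[str]:
    """Return a list of red flags for a link claiming to be LinkedIn.

    Heuristics mirror the post: the host must be linkedin.com (or a
    subdomain of it); a host that merely *contains* 'linkedin' is a
    classic look-alike, like the .tr and .app examples above.
    """
    flags = []
    host = (urlparse(url).hostname or "").lower()
    legit = host == "linkedin.com" or host.endswith(".linkedin.com")
    if not legit:
        flags.append(f"host {host!r} is not a linkedin.com domain")
        if "linkedin" in host:
            flags.append("look-alike host: contains 'linkedin' but is a different domain")
    return flags
```

The same pattern generalizes to any brand scammers impersonate: compare the registrable domain, never just search for the brand name in the URL.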
4. Welcome back to the Community Podcasts, a mini-series on the Kaspersky Transatlantic Cable podcast. Joining me again as our co-host for this series is Anastasiya Kazakova, a Senior Public Affairs Manager who coordinates global cyber diplomacy projects at Kaspersky. As a reminder, the Community Podcasts is a short series of podcasts featuring frank cyber diplomacy conversations with cyber-heroes who, despite everything – growing fragmentation, confrontation, and cyber threats – build communities and unite people to work together for the common good. Why are they doing this? And are their efforts working? For our 4th episode, we are joined by Allison Pytlak, the Program Manager for Reaching Critical Will. Reaching Critical Will is the disarmament program of the Women’s International League for Peace and Freedom (WILPF), the oldest women’s peace organization in the world. Reaching Critical Will works for disarmament and arms control of many different weapon systems, the reduction of global military spending and militarism, and the investigation of gendered aspects of the impact of weapons. Allison contributes to the organization’s monitoring and analysis of disarmament processes and its research and other publications, as well as liaises with UN, government, and civil society colleagues. Over the course of our conversation, we discuss the importance of gender in the international cybersecurity landscape, working with the UN, what the future holds for her and WILPF, and more.
For some of the articles referenced in the podcast, check out:

Why gender matters in international cyber security (WILPF and APC)
Programming action: observations from small arms control for cyber peace (WILPF)
Cyber Peace & Security Monitor (WILPF-RCW monitoring of OEWG meetings) plus our relevant page: https://reachingcriticalwill.org/disarmament-fora/ict
System Update: Towards a Women, Peace and Cybersecurity Agenda (UNIDIR, diverse authors)
Gender approaches to cyber security (UNIDIR, diverse authors)
Technology and Innovation for Gender Equality (WILPF)
Making Gender Visible in Digital ICTs and International Security (Sarah Shoker, University of Waterloo)

View the full article
5. In terms of daily workload, few infosec roles compare with that of a security operations center (SOC) analyst. We know this firsthand, which is why we pay special attention to developing tools that can automate or facilitate their work. Following our recent upgrade of Kaspersky CyberTrace to a full-fledged threat intelligence (TI) platform, here we demonstrate how a SOC analyst can use this tool to study the attack kill chain. For example, suppose someone uses a workstation on the corporate network to visit a website that is flagged as malicious. The company’s security solutions detect the incident, and the security information and event management (SIEM) system logs it. Ultimately, a SOC analyst armed with Kaspersky CyberTrace sees what’s going on.

Identifying the attack chain

In the list of discovered anomalies, the analyst sees a detection based on data from the “Malicious URL” feed and decides to take a closer look. Contextual information (IP address, hashes of malicious files associated with the address, security solution verdicts, WHOIS data, etc.) is available to the analyst directly in the feed. However, the most convenient way to analyze the attack chain is to use a graph (the View on Graph button).

Kaspersky CyberTrace: Starting the attack analysis

So far we have little information: the fact of detection, the detected malicious URL, the internal IP address of the computer from which the URL was opened, and the ability to view full contextual information for the detected threat in the sidebar. That’s just a prelude to the interesting part, however. By clicking on the icon of the malicious URL, the analyst can request the known indicators associated with the address: IP addresses, additional URLs, and hashes of malicious files downloaded from the site.

Related CyberTrace indicators request

The next logical step is using the indicators to check for other detections in the infrastructure.
Doing so couldn’t be simpler: Click any object (for example, a malicious IP address) and select Related CyberTrace Detects. Additional detections are displayed as a graph. In just one click, we can find out which user accessed the malicious IP address (or on which machine a URL query to the DNS server returned the IP address). Similarly, we can check which users have downloaded files whose hashes are present in the indicators.

Related CyberTrace detects request

All indicators in the screenshots represent tests and constitute an example of fairly modest incidents. In the real world, however, we might see thousands of detections, sorting through which manually, without a graphical interface, would be quite difficult. With a graphical interface, however, each point of the graph pulls all context from the threat data feeds. For convenience, the analyst can group or hide objects manually or automatically. If the analyst has access to additional sources of information, they can add indicators and mark the interrelationships. Now, by studying the indicator interconnections, the expert can reconstruct the full attack chain and learn that the user typed the URL of a malicious site, the DNS server returned the IP address, and a file with a known hash was downloaded from the site.

Integration with Kaspersky Threat Intelligence Portal

Matching detections to threat data feeds serves for analysis of one isolated incident, but what if the incident is part of a large-scale, ongoing, multiday attack? Getting historical background and context is critical for SOC analysts. For this purpose, the upgraded Kaspersky CyberTrace features integration with another of our analysis tools, Kaspersky Threat Intelligence Portal. Kaspersky Threat Intelligence Portal has access to a complete cyberthreat database, built by our antimalware experts since day one of Kaspersky.
Through the Related External Indicators menu, the analyst can access with one click all of the information Kaspersky has accumulated to find out which domains and zones are associated with a malicious IP address, what other URLs were previously associated with the IP, hashes of the files that have attempted to gain access to the URL, hashes of files downloaded from that URL, which URLs sites hosted at this IP have linked to (and from which URLs it has been linked), and more. Another advantage of this integration is the ability to search for reports on APT attacks associated with a specific URL or file hash. Kaspersky Threat Intelligence Portal subscribers can readily find and download reports that mention such URLs or file hashes. All of this information was available before, but getting it required manual work — finding the right hash or address, copying it, navigating to the portal. Quick, easy access to all of this information enables infosec teams to take appropriate, timely countermeasures on attack detection, and it also simplifies incident investigation.

What else can Kaspersky CyberTrace do?

Kaspersky CyberTrace not only works with threat data feeds from Kaspersky, but it can also connect with third-party sources. What’s more, it has a convenient tool for comparing information in different feeds: the Supplier Intersections matrix. The matrix enables analysts to see which feeds have more data — and if one feed has no unique data, the analyst can unsubscribe. CyberTrace also supports teamwork in multiuser mode: The analyst can give colleagues access to comment on or contribute to the investigation. If necessary, analysts can export indicators from CyberTrace, making them accessible via a URL. Such an action may be needed, for example, to add a rule for automatic blocking of indicators at the firewall level. Another useful feature is Retroscan, which analysts can use to save old logs from SIEM systems and check them later against new feeds.
In other words, if analysts had insufficient data for a proper investigation at the time of incident detection, they can still carry one out retrospectively. See our CyberTrace page for more information. View the full article
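The retrospective matching idea behind Retroscan is easy to show in miniature. This is a toy sketch, not the CyberTrace implementation or API: the log field names (`host`, `url`, `ip`, `sha256`) and the indicator-set format are invented for the example. The point is simply that saved log entries can be re-checked against a feed received later.

```python
def retroscan(log_entries: list[dict], indicators: set[str]) -> list[tuple]:
    """Re-check saved SIEM log entries against a newly received indicator set.

    Returns (host, field, matched_value) tuples so an analyst can see
    which machine touched which indicator, and via which field.
    """
    hits = []
    for entry in log_entries:
        for field in ("url", "ip", "sha256"):
            value = entry.get(field)
            if value and value in indicators:
                hits.append((entry.get("host"), field, value))
    return hits


# Example: logs saved last month, matched against this week's feed.
saved_logs = [
    {"host": "wks-042", "url": "http://bad.example/payload"},
    {"host": "wks-007", "url": "http://ok.example/index"},
]
new_feed = {"http://bad.example/payload"}
hits = retroscan(saved_logs, new_feed)
```

A real deployment works at a very different scale and matches many more indicator types, but the principle is the same: data that looked clean at detection time can still produce hits once fresher intelligence arrives.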
6. In the six years since the launch of Discord’s chat and VoIP service, the platform has become a popular tool for building communities of interest, especially among gamers. However, just like any other platform that hosts user-generated content, Discord can be exploited. Discord’s extensive customization options also open the door to attacks on ordinary users, both on and off the chat server. Recent research into Discord security has revealed several cyberattack scenarios linked to its chat service, some of which can be truly dangerous for users. Here’s how to protect yourself.

Malware being spread through Discord

Malicious files distributed through Discord represent the most obvious threat. A recent study identified several dozen types of malware. We call this threat “obvious” simply because sharing files through Discord is very easy; every file uploaded to the platform is assigned a permanent URL, formatted as follows:

cdn.discordapp.com/attachments/{channel ID}/{file ID}/{file name}

Most files are freely available for download by anyone with the link. The study describes a real-life attack example: a fake website offering Zoom Web conferencing client downloads. The website looks like the real one, and the malicious file is hosted on a Discord server. That gets around restrictions on downloading files from untrusted sources. The rationale is that the servers of a popular application used by millions are less likely to be blocked by antimalware solutions. The malicious “lifehack” is as obvious as are the means of combating it: High-quality security solutions look at more than just the download source to determine the level of threat a file may pose. Kaspersky tools immediately detect malicious functionality the first time a user tries to download the file, for example, and then, with the help of a cloud-based security system, let all other users know the file should be blocked. All services that permit uploads of user-generated content face issues of misuse.
Free Web hosting sites see phishing pages created on them, for example, and file-sharing platforms are used for spreading Trojans. Form-filling services serve as spam channels. The list goes on. Platform owners do try to combat the abuse, but with mixed results. Discord developers also clearly need to implement at least some basic means of user protection. For example, files used on a particular chat server need not be made available to the whole world. Checking and automatically blocking known malware also seems wise. Regardless, this is Discord’s least exotic problem, and combating it is really no different than dealing with any other method of malware distribution. It is not, however, the only threat users face.

Malicious bots

Another recent study demonstrates how easy it is to exploit Discord’s bot system. The bots extend chat server functionality in various ways, and Discord offers a vast array of options for customizing users’ own chats. One example of chat-related malicious code was recently published on (and fairly quickly removed from) GitHub: Using mainly capabilities provided by the Discord API, the author was able to execute arbitrary code on a user’s computer. It might look something like this: A malicious chatbot launches an arbitrary program on a user’s computer after receiving a command through a Discord chat.

In one attack scenario, malicious code relies on a locally installed Discord client to launch automatically on startup. Installing a bot from an untrusted source can lead to such an infection. The researchers also looked at another Discord misuse scenario that doesn’t rely on the user having installed a Discord client. In this case, the malware uses the chat service to communicate.
Thanks to the public API, uncomplicated registration process, and basic data encryption, a backdoor can easily and conveniently use Discord to send data about the infected system to its operator and, in turn, receive commands to execute code, upload new malicious modules, and more. That kind of scenario appears quite dangerous; it greatly simplifies the work of attackers, who then do not need to create a communication interface with infected computers but can instead use something already available. At the same time, it somewhat complicates the detection of malicious activity; conversations between the backdoor and its operator look like regular user activity in a popular chat.

Protection for gamers

Although the aforementioned threats apply to all Discord users, they relate mainly to those who use Discord as a game add-in: for voice and text communication, streaming, collecting gaming statistics, and so on. Such use entails substantial customization and adds to users’ risks of finding and installing malicious extensions. The relaxed, seemingly safe environment actually represents a further threat, increasing the success rate of social engineering techniques — bait goes down easier in a cozy chat with people you believe are your friends. We recommend following the same digital hygiene rules on Discord as you do elsewhere on the Web: Don’t click suspicious links or download obscure files; scrutinize offers that sound too good to be true; and refrain from sharing any personal or financial information. As for the Trojans and backdoors, Discord-based or simply distributed through the platform, they are not special or essentially different from other kinds of malware. Use a reliable antivirus app to stay safe, keep it running at all times — including when you install any software or add bots to a chat server — and pay attention to its warnings. Performance need not be a concern.
For example, our security products include a game mode that minimizes overhead without compromising protection. View the full article
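The attachment URL format quoted in the post is regular enough to match mechanically, which is how a mail gateway or proxy might spot files delivered via Discord's CDN. The URL pattern comes from the article; the regex, the function name, and the idea of parsing out the IDs are an illustrative sketch, not any product's actual filter.

```python
import re

# Matches the format described above:
# cdn.discordapp.com/attachments/{channel ID}/{file ID}/{file name}
ATTACHMENT_RE = re.compile(
    r"^https://cdn\.discordapp\.com/attachments/(\d+)/(\d+)/([^/?#]+)$"
)


def parse_attachment(url: str):
    """Return channel ID, file ID, and file name for a Discord
    attachment URL, or None if the URL doesn't match the pattern."""
    m = ATTACHMENT_RE.match(url)
    if not m:
        return None
    channel_id, file_id, filename = m.groups()
    return {"channel_id": channel_id, "file_id": file_id, "filename": filename}
```

As the post argues, the download source alone proves nothing either way; a match here is just one signal, to be combined with scanning the file itself.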
  7. For this edition of the Kaspersky Transatlantic Cable podcast, we have quite the entertaining conversation, if you ask me. To open pod 226, we discuss a $10 billion hit caused by Apple. In this story, we take a look at the business impact Apple’s app-tracking policy has had on major social networks including Facebook, Snapchat, and more. From there, we discuss Facebook’s change to Meta. Our third story takes us back to school, with a trip to Harvard, where there is a bit of tomfoolery and black hat SEO going on with the university’s self-publishing system. After that, we talk about German authorities’ exposing one of the REvil group’s major players. To close out the podcast, we have a weird story involving an Instagram hacker using hostage-style videos for scams. If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below: Apple’s app tracking policy reportedly cost social media platforms nearly $10 billion Facebook changes its company name to Meta Scammers are creating fake students on harvard.edu and using them to shill brands Suspected REvil gang insider identified Instagram hacker forces victim to make hostage-style video View the full article
8. Everyone needs certain skills to survive in today’s digital world. Adults tend to acquire them as new technologies come along, but today’s children are practically born with a smartphone in their hand. It’s up to parents to teach them how to exist in a world of constant information bombardment. Here are seven habits that will help your children adapt to the Web.

1. Schedule time without devices

When children spend a lot of time using technology, they can get addicted to it. According to researchers from the American Academy of Child & Adolescent Psychiatry, this addiction can lead to sleep problems, mood shifts, weight gain, poor self-image, and body-image issues. Experts suggest introducing children to today’s online world by gradually increasing their screen time and removing restrictions. Some tips also apply to children of any age: The simplest and most effective include not using devices close to bedtime and silencing devices overnight. You should also agree on other times when kids are not allowed to use their phone, such as during family meals.

2. Take charge of charging

Although technology is advancing at lightning speed, today’s devices still run out of power quickly. You can kill two birds with one stone at bedtime by having children leave their devices charging somewhere outside of their bedroom, such as in the entryway or kitchen — the device will always be charged in the morning, and your children won’t be able to watch TikTok trends right before bedtime. Keep in mind that children tend to use their devices so much during the day that by the time evening rolls around, the phone battery is probably dead. If that’s the case in your household, consider buying portable chargers for your children, and get them into the habit of taking the chargers when they’ll be out for long.

3. Pay attention to information security and more

When children are immersed in the virtual world, they are susceptible to a host of dangers, both on the Web and in the real world. Start by stressing to them that they should not be staring at their phones while they’re crossing the street or walking up or down stairs. Next up is online safety, including Internet threats such as scams, theft of personal data, viruses, and much more. Tell your children not to visit suspicious websites (and teach them what that means), enter passwords or any personal information there, open strange-looking links, or download apps from anywhere but the official app stores. Emphasize that they should never share personal documents, credit card information, or photos that could put them or their friends in a compromising position. It is unlikely that children will remember and follow all of those rules right off the bat. For help, you can turn to a reliable security solution. For example, Kaspersky Internet Security protects devices from viruses, phishing, and online scams, and Kaspersky Safe Kids helps shield children from dangerous content and limit the amount of time they spend on their devices.

4. Aim for sustainable media consumption

When our devices are constantly sending notifications, we can easily get overwhelmed and lose our concentration. Even adults sometimes have a hard time fighting the temptation to check messages, so you can imagine how difficult it is for kids. Limit the alerts on your children’s phones so they don’t get distracted from schoolwork or other tasks — and so they can finish their homework faster. Unfortunately, you can’t get rid of notifications from all apps on all devices at once; you need to configure them separately on phones and laptops, and every operating system has its own specific features and built-in tools for doing so.
We have some posts that can help you manage notifications:

How to turn off notifications in iOS and iPadOS
How to configure notifications in Android
Getting rid of browser notifications

5. Follow digital etiquette

Just as in the real world, unspoken rules govern Internet behavior. People usually master them simply by communicating online, but children need help avoiding awkward situations, so you should discuss certain expectations with them before they go online. For example, discuss the differences between communicating over e-mail, on social networks, and in messaging apps. It’s also important to explain acceptable behavior. One rule of thumb is to ask before posting — every time — would I say this in person? Writing insults and demeaning people online is more than rude; it can have real consequences.

6. Organize information

Some say an organized phone or computer reflects an organized mind. A messy closet probably doesn’t really affect your child’s life, but losing passwords or files or forgetting phone numbers can be a problem. Kids should learn to organize information from an early age. Better yet, they should get in the habit of making backup copies of their most critical information. Make the most of external drives — flash drives or hard drives — or cloud storage. The latter is an important topic worth discussing separately. The cloud is a great resource, but children need to be cautious with it. They especially need to be careful not to allow just anyone access to important files.

7. Schedule a regular digital detox

With digital technology infiltrating almost every aspect of children’s lives, it’s virtually impossible to avoid information overload. That means children need to be able to step away and make the Internet a less important part of their lives — first with your help and then on their own. First and foremost, limit the use of social networks — they tend to be the biggest drain on time and energy. The post “Eight steps to freedom: How to detach from social networks” has useful tips to help you and your children with this. A more effective, although also more complicated, way to combat information overload is the digital detox, when you put away your devices for a certain amount of time. For best results, do this on a regular basis. You can combine detoxes with nature excursions, exercise, or activities with friends — no devices allowed. The digital age has forced parents to confront brand-new challenges. As you deal with them, remember that you can be the best example for your children. It will be challenging to follow these rules at first, but over time they’ll become ingrained and will help your children reconnect with the world around them. View the full article
9. Need to represent data in a way that really grabs attention? That calls for an infographic. Preferably interactive. Preferably global. And most preferable of all, encompassing the entire planet. Here are six world maps that could suck you in for hours (so don’t open them if you have urgent business to attend to). Everyone else, welcome to our list of top Internet globes.

Google Earth: The one and only
https://earth.google.com/

It’s scary to think that Google launched its Earth project 20 years ago. The map grew and changed, became popular, and then seemed to fall out of fashion — unjustly, it has to be said. The current version not only lets you scour any piece of land to find your home, but also now features 3D models of the planet’s top architectural monuments and geographical wonders. Anyone sick of gazing at the Sydney Opera House or the Eiffel Tower can take a computer-generated flight over the Alps or the Himalayas. The app includes virtual tours for those cooped up at home because of the pandemic, as well as handy tools for measuring distances and calculating areas.

LeoLabs: Everything in orbit
https://platform.leolabs.space/visualization

This globe will appeal to prophets of doom and fans of conspiracy and espionage theories: The map tracks all of the satellites (and what they are turning into, i.e., space debris) currently orbiting our planet. You can zoom in and hover your mouse cursor over any object to find out its name and type (satellite, debris, or something else). Detailed satellite information, sadly, is not provided, but you can do your own online search based on names.

Ventusky: Weather at your fingertips
https://www.ventusky.com/

Nothing to talk about? That’s what the weather’s for! This map provides real-time visualizations of meteorological data for any location on Earth. On the left-hand side, you can select temperature, cloud cover, pressure, precipitation, humidity, air quality — anything that goes on outside.
On the right, you can change the units of measurement so as not to wrestle with inches versus centimeters or Fahrenheit versus Celsius. The timeline at the bottom offers a rudimentary weather forecast. Our favorite pastime at the moment is checking the temperature in Verkhoyansk, that well-known vacation spot and an excellent data point for anyone who complains “it’s a bit chilly today.” Flightradar24: Everything about aircraft https://www.flightradar24.com/ For those frustrated with the lack of detailed satellite data in LeoLabs’ visualization, welcome to Flightradar24. Here you can find out about almost any aircraft currently in flight or about to take off, in real time. That includes information about the airline, place of departure and destination, model, altitude, speed, and route progress. Besides being incredibly interesting, the service has practical benefits for those who like to keep everything under control. Say you’re meeting someone at the airport: Just enter the flight number on Flightradar24 to learn the plane’s precise landing time. Flight info on the airport website is for wimps. Paid subscribers get to see a more comprehensive flight history, with aircraft serial number, vertical speed, outside air temperature, and a bunch of other stats for true aviation geeks. Incidentally, a similar map exists for seagoing vessels. And even though the Ever Given blockage has long been cleared, it’s still fascinating to watch the marine traffic through the Suez Canal. TheTrueSize: Which is bigger, Greenland or India? https://thetruesize.com/ The greatest ever illusionist is not David Blaine or your bank manager, but Gerardus Mercator. There are other ways to project a sphere onto a plane, but the world map familiar to everyone since childhood is his. Print out the map and try to stick it evenly onto a globe, however, and you’ll drift off course — and as you get closer to the poles, the size mismatch only increases. 
The trick is, with the Mercator projection, the horizontal dimensions in the extreme northern and southern latitudes have to be stretched, which causes Greenland and Africa to look roughly equal in size. TheTrueSize lets you take any country — from that same Mercator projection — and drag it around the map to make objective comparisons. Just type a country’s name in the search bar, and when it’s highlighted on the map, drag it to a different part of the world to see, for example, Mexico’s real size relative to Europe, or the Democratic Republic of the Congo’s to Alaska. Not recommended for users from Greenland. Earth 2050: Glimpse the future https://2050.earth/ It’s our very own predictions of the future, all in one interactive globe. Choose a planning horizon (to 2030, 2040, or 2050) and find out which fruits of progress will ripen. Check out when the first underwater farms, transformer apartments, and Martian colonies — or even (don’t hold your breath) Half-Life 3 — will appear. Some predictions come from professional futurologists, others from users. So if you feel like the map is missing something, we encourage you to share your vision. Note that submissions are moderated, so please try to keep them within the laws of physics. View the full article
  10. The recently released No Time to Die lowers the curtain on the Daniel Craig era. With that in mind, let’s run through all five of his Bond outings from a cybersecurity perspective — you’ll be shaken, but hopefully not stirred, by our findings. What unites the movies, aside from Craig himself, is a complete lack of understanding of cybersecurity basics by the movie’s MI6 employees. Whether the oversight is deliberate (highlighting the outdatedness of Bond and the whole 00 section concept) or due to the incompetence of the scriptwriters and lack of cyberconsultants is not clear. Whatever the case, here’s a look at some of the absurdities we spotted in the films, in order of appearance. Spoiler alert! Casino Royale In Craig’s first Bond movie, we see the following scene: Bond breaks into the house of his immediate superior, M, and uses her laptop to connect to some kind of spy system to find out the source of a text message sent to a villain’s phone. In reality, Bond could only do that if: MI6 does not enforce an automatic screen lock and logout policy, and M leaves her laptop permanently on and logged in; MI6 does not enforce the use of strong passwords, and M’s passwords are easily guessable; M does not know how to keep her passwords secret from her colleagues, or she uses passwords that were compromised. Any one of these scenarios spells trouble, but the third is the most likely one; a little later in the story, Bond again logs in remotely to a “secure website” using M’s credentials. Bond’s password attitude is no better. When he needs to create a password (of at least six characters) for the secret account that will hold his poker winnings, he uses the name of colleague (and love interest) Vesper. What’s more, the password is actually a mnemonic corresponding to a number (like the outdated phonewords for remembering and dialing numbers on alphanumeric keypads). It is effectively a 6-digit password, and based on a dictionary word at that. 
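As an aside, the phoneword trick is easy to reproduce: on a standard telephone keypad (ITU E.161), each letter maps to the digit on its key, so a mnemonic like “VESPER” collapses into a fixed 6-digit PIN drawn from a keyspace of just one million codes. A quick illustrative sketch — the keypad mapping is standard, but everything else here is our own construction, not anything shown in the film:

```python
# Standard telephone-keypad letter groups (ITU E.161).
KEYPAD = {
    "ABC": "2", "DEF": "3", "GHI": "4", "JKL": "5",
    "MNO": "6", "PQRS": "7", "TUV": "8", "WXYZ": "9",
}
# Flatten into a per-letter lookup table.
LETTER_TO_DIGIT = {ch: digit for letters, digit in KEYPAD.items() for ch in letters}

def phoneword_to_digits(word: str) -> str:
    """Convert a mnemonic like 'VESPER' into the digit string actually typed."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.upper())

print(phoneword_to_digits("VESPER"))   # → 837737
print(f"keyspace: {10 ** 6:,} possible 6-digit PINs")
```

A million combinations is trivial to brute-force, and picking the PIN from a loved one’s name shrinks the effective search space to almost nothing.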
Quantum of Solace The least computerized of the last five Bond movies, Quantum of Solace nonetheless includes a moment worthy of attention here. Early in the film, we learn that Craig Mitchell, an MI6 employee of eight years — five as M’s personal bodyguard — is actually a double agent. Of course, that’s an old-school security issue rather than the cyber kind. However, M’s carelessness with passwords, as seen in the previous film, suggests MI6’s secrets may well be in the hands of cat-stroking supervillains the world over. Skyfall At the other end of the cyberspectrum lies Skyfall, the most computerized of the five. Here, information security lies at the very heart of the plot. The cybermadness is evident from scene one. For convenience, we’ll break down our analysis chronologically. Data leak in Istanbul An unknown criminal steals a laptop hard drive containing “the identity of every NATO agent embedded in terrorist organizations across the globe.” Even MI6’s partners do not know about the list (which moreover does not officially exist). The very idea of such a drive is already a massive vulnerability. Let’s assume that the database is vital to MI6 (it is). What, then, was it doing in a safe house in Istanbul, protected by just three agents? Even if the drive is, as we’re told, encrypted and alerts MI6 of any decryption attempt? Cyberterrorist attack on SIS The first real cyberincident crops up a bit later: a cyberterrorist attack on the headquarters of the British Secret Intelligence Service. The attacker tries to decrypt the stolen drive — seemingly, according to the security system, from M’s personal computer. The defenders desperately try to shut down the computer, but the evildoers blow up the SIS building on the bank of the Thames. 
The ensuing investigation reveals that the assailant hacked into the environmental control system, locked out the safety protocols, and turned on the gas; but before doing so, they hacked M’s files, including her calendar, and extracted codes that make decrypting the stolen drive a question of when, not if. Let’s assume the alert from the stolen drive on M’s computer represented an attempt at disinformation or trolling (after all, the drive could not have been in the building). And let’s ignore questions about the building’s gas supply — who knows, maybe MI6 corridors were lit with Jack-the-Ripper-era gas lanterns; Britain is a land of traditions, after all. In any case, hacking the engineering control systems is perfectly doable. But how did the engineering control systems and M’s computer — supposedly “the most secure computer system in Britain” — end up on the same network? This is clearly a segmentation issue. Not to mention, storing the drive decryption codes on M’s computer is another example of pure negligence. They might at least have used a password manager. Cyberbullying M The perpetrators tease M by periodically posting the names of agents in the public domain. In doing so, they are somehow able to flash their messages on her laptop. (There seems to be some kind of backdoor; otherwise how could they possibly get in?) But MI6’s experts are not interested in checking the laptop, only in tracing the source of the messages. They conclude it was sent by an asymmetrical security algorithm that bounced the signal all over the globe, through more than a thousand servers. Such a tactic may exist, but what they mean by “asymmetrical security algorithm” in this context is about as clear as mud. In the real world, “asymmetric encryption algorithm” is a term from cryptography; it has nothing to do with hiding a message’s source. 
Insider attack on MI6 Bond locates and apprehends the hacker (a former MI6 agent by the name of Silva), and takes him and his laptop to MI6’s new headquarters, unaware that Silva is playing him. Enter Q: nominally a quartermaster, functionally MI6’s hacker-in-chief, actually a clown. Here, too, the reasoning is not entirely clear. Is he a clown because that’s funny? Or was the decision another consequence of the scriptwriters’ cybersecurity illiteracy? The first thing Q does is connect Silva’s laptop to MI6’s internal network and start talking gobbledygook, which we will try to decipher: “[Silva]’s established failsafe protocols to wipe the memory if there’s any attempt to access certain files.” But if Q knows that, then why does he continue to analyze Silva’s data on a computer with such protocols installed? What if the memory gets erased? “It’s his omega site. The most encrypted level he has. Looks like obfuscated code to conceal its true purpose. Security through obscurity.” This is basically a stream of random terms with no unifying logic. Some code is obfuscated (altered to hinder analysis) using encryption — and why not? But to run the code, something has to decipher it first, and now would be a good time to figure out what that something is. Security through obscurity is indeed a real-life approach to securing a computer system, one in which, instead of relying on robust security mechanisms, the defenders rely on making data hard for would-be attackers to puzzle out. It’s not the best practice. What exactly Q is trying to convey to viewers is less than clear. “He’s using a polymorphic engine to mutate the code. Whenever I try to gain access, it changes.” This is more nonsense. Where the code is, and how Q is trying to access it, is anyone’s guess. If he’s talking about files, there’s the risk of memory erasure (see the first point). And it’s not clear why they can’t stop this mythical engine and get rid of the “code mutation” before trying to figure it out. 
As for polymorphism, it’s an obsolete technique in which malicious code mutates itself as it creates new copies — viruses in the strictest sense of the word. It has no place here. Visually, everything that happens on Silva’s computer is represented as a sort of spaghetti diagram of fiendish complexity sprinkled with what looks like hexadecimal code. The eagle-eyed Bond spots a familiar name swimming in the alphanumeric soup: Granborough, a disused subway station in London. He suggests using it as a key. Surely a couple of experienced intelligence officers should realize that a vital piece of information left in plain sight — right in the interface — is almost certainly a trap. Why else would an enemy leave it there? But the clueless Q enters the key without a murmur. As a result, doors open, “system security breach” messages flash, and all Q can do is turn around and ask, “Can someone tell me how the hell he got into our system?!” A few seconds later, the “expert” finally decides it might make sense to disconnect Silva’s laptop from the network. All in all, our main question is: Did the writers depict Q as a bumbling amateur on purpose, or did they just pepper the screenplay with random cybersecurity terms hoping Q would come across as a genius geek? Spectre In theory, Spectre was intended to raise the issue of the legality, ethics, and safety of the Nine Eyes global surveillance and intelligence program as an antiterrorism tool. In practice, the only downside of creating a system such as the one shown in the film is if the head of the Joint Secret Service (following the merger of MI5 and MI6) is corrupted — that is, if, as before, access to the British government’s information systems is obtained by an insider villain working for Bond’s sworn enemy, Blofeld. Other potential disadvantages of such a system are not considered at all. As an addition to the insider theme, Q and Moneypenny pass classified information to the officially suspended Bond throughout the movie. 
Oh, and they misinform the authorities about his whereabouts. Their actions may be for the greater good, but in terms of intelligence work, they leak secret data and are guilty of professional misconduct at the very least. No Time to Die In the final Craig-era movie, MI6 secretly develops a top-secret weapon called Project Heracles, a bioweapon consisting of a swarm of nanobots that are coded to victims’ individual DNA. Using Heracles, it is possible to eliminate targets by spraying nanobots in the same room, or by introducing them into the blood of someone who is sure to come into contact with the target. The weapon is the brainchild of MI6 scientist and double agent (or triple, who’s counting?) Valdo Obruchev. Obruchev copies secret files onto a flash drive and swallows it, after which operatives (the handful who weren’t finished off in the last movie) of the now not-so-secret organization Spectre break into the lab, steal some nanobot samples and kidnap the treacherous scientist. We already know about the problems of background checks on personnel, but why is there no data loss prevention (DLP) system in a lab that develops secret weapons — especially on the computer of someone with a Russian surname, Obruchev? (Russian = villain, as everyone knows.) The movie also mentions briefly that, as a result of multiple leaks of large amounts of DNA data, the weapon can effectively be turned against anyone. Incidentally, that bit isn’t completely implausible. But then we learn that those leaks also contained data on MI6 agents, and that strains credulity. To match the leaked DNA data with that of MI6 employees, lists of those agents would have to be made publicly available. That’s a bit far-fetched. The cherry on top, meanwhile, is Blofeld’s artificial eye, which, while its owner was in a supermax prison for years, maintained an around-the-clock video link with a similar eye in one of his henchmen. 
Let’s be generous and assume it’s possible to miss a bioimplant in an inmate. But the eye would have to be charged regularly, which would be difficult to do discreetly in a supermax prison. What have the guards been doing? What’s more, at the finale, Blofeld is detained without the eye device, so someone must have given it to him after his arrest. Another insider? Instead of an epilogue One would like to believe all those absurdities are the result of lazy writing, not a genuine reflection of cybersecurity practice at MI6. At least, we hope the real service doesn’t leak top-secret weapons or store top-secret codes in cleartext on devices that don’t even lock automatically. In conclusion, we can only recommend the scriptwriters raise their cybersecurity awareness, for example by taking a cybersecurity course. View the full article
  11. Unidentified scammers are selling Green Passes (certificates required for travel and access to many public places and events in the European Union) on hacker forums and in Telegram channels. To demonstrate their capabilities and attract potential customers, they created a Green Pass issued in the name of Adolf Hitler. Perhaps most disturbing, the QR code passes app verification as valid. This raises a number of questions, which we will try to answer in this post. What is a Green Pass? A Green Pass is a certificate verifying that its owner was vaccinated, recently recovered from COVID-19, or received a negative test result no more than 48 hours (rapid test) or 72 hours (PCR) ago. The certificate contains a QR code that can be validated with an application. The Green Pass is a standard document in the countries of the European Union as well as in Israel (where it was initially developed), Turkey, Iceland, Ukraine, Switzerland, Norway, and some other countries. Usually, medical institutions issue Green Pass certificates. Depending on the country, a Green Pass may be required for travel; for visiting bars, restaurants, museums, and public events; in educational institutions; and even for work. The Green Pass also exists in paper form, but most often it is an application that displays a QR code to verify the certificate. How attackers can sign fake certificates Some shady traders on the Internet and Telegram channels in particular are selling forged Green Pass certificates apparently issued by health services in Poland or France. Several theories explain how they could succeed. According to one, criminals somehow got a secret cryptographic key enabling them to issue such certificates. If that’s the case, the legitimate Green Pass certificates will probably have to be reissued. According to another theory, the sellers have accomplices in France’s and Poland’s healthcare systems. 
In that case, reissuing the cryptographic key is unlikely to help — law enforcement agencies will have to find the insiders. Is the entire Green Pass system compromised? For now at least, the Green Passes most EU countries issue remain as legitimate as before. Only certificates issued in Poland and France are under suspicion. Will Green Pass certificates issued in Poland and France be revoked? EU authorities are conducting investigations. In the worst case scenario, Poland and France will have to reissue certificates — but not necessarily all of them. If the malefactors cannot manipulate issue dates, then only some will have to be replaced. Can you buy a fake Green Pass? Well, there’s nothing stopping you from spending your money. However, visiting EU countries with a fake certificate is not a good idea. First, the fake certificates will be revoked, and although you’d most likely just lose some money, it is also possible customers will be caught in the same law-enforcement net as forgers. With a fake Green Pass, you have a good chance of winning a long conversation with European law enforcement agents. We have reason to believe this is far from the last fraud scheme regarding the Green Pass system. Various scams will most likely appear quite soon. However, this incident will also draw more attention from law enforcement agencies. For that and other reasons, we do not recommend getting a Green Pass from anywhere but an official European medical institution. View the full article
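The leaked-key theory above is easy to illustrate. Digital-signature schemes make forgery infeasible only while the signing key stays private; whoever holds it can issue payloads that verify perfectly, which is why a confirmed leak forces reissuing. The sketch below uses textbook RSA with toy parameters purely for illustration — the real Green Pass uses a different scheme entirely (EC signatures over CBOR-encoded data), and these numbers are nowhere near real key sizes:

```python
import hashlib

# Toy textbook-RSA keypair. The tiny primes are for illustration only;
# they bear no resemblance to real certificate-signing keys.
p, q = 61, 53
n = p * q          # 3233 — public modulus
e = 17             # public exponent (everyone has this)
d = 2753           # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def digest(msg: bytes) -> int:
    # Hash the payload and reduce it into the modulus range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes, priv: int) -> int:
    # Only whoever holds the private exponent can compute this...
    return pow(digest(msg), priv, n)

def verify(msg: bytes, sig: int, pub: int) -> bool:
    # ...but anyone with the public key can check it.
    return pow(sig, pub, n) == digest(msg)

cert = b"subject=...; status=vaccinated"   # any payload the key holder likes
sig = sign(cert, d)
print(verify(cert, sig, e))   # True — indistinguishable from a legitimately issued payload
```

The point: verification only proves the payload was signed with the private key, not that the key was used honestly. Revoking and reissuing the keypair invalidates every signature it ever produced — forged and legitimate alike — which is exactly the worst-case scenario described above.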
  12. With Dave on vacation, our APAC head of social media joins Ahmed and me for this week’s edition of the Kaspersky Transatlantic Cable podcast. A warm welcome to Jag Sharma. To kick off the conversation, we revisit the topic of REvil — again. This week, we look at the FBI’s infiltration of the ransomware gang and how the new approach differs from the usual. Although of course we discuss the news, we also debate the merits of the live-blogging the gang has been doing as well. From there, Jag gets his baptism by fire in one of Ahmed’s famous quizzes. Moving along, we discuss the need to secure space’s infrastructure. If everyone’s heading that way anyway, best to make it safe. Our third story takes a look at the Squid Game phenomenon and the rise of Joker-infested unofficial apps on the Play Store. The podcast closes with a story of how AI and a T-shirt led to a man getting a ticket for his automobile. No, you didn’t read that wrong – the AI really thought a woman’s T-shirt was a license plate. But hey, AI is the future, right? If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below: REvil servers shoved offline by governments – but they’ll be back, researchers say FBI, others crush REvil using ransomware gang’s favorite tactic against it Space infrastructure and cyber threats Squid Game app downloaded thousands of times was really Joker malware in disguise Driver fined after traffic camera thinks pedestrian’s shirt is a license plate View the full article
  13. Today I’m proud to announce that we have acquired a company called Brain4Net, an SD-WAN and NFV orchestration software developer. That means we’re going to significantly boost our cloud security capabilities and XDR offering. The acquisition enables us to develop reliable detection and response capabilities in the “cloud-first” paradigm by delivering our own solutions based on SD-WAN and NFV to the market. To begin, let’s talk about strengthening our solution portfolio with Secure Access Service Edge (SASE) capabilities. On the market since 2015, Brain4Net spent six years developing solutions for IT automation and building software-defined networks. Now the team is joining us to build a compelling SASE solution, adding its experience and developments to help us create a unified platform that layers network security on top of our best-in-class security expertise. Using a single data lake and a single investigation tool across endpoint, cloud, and network data significantly accelerates security teams’ operations in threat detection and response. Distributed IT architecture is the new normal A typical enterprise IT infrastructure used to include headquarters with a central data center as well as branches that directed all of their traffic through HQ. In time, companies began migrating their infrastructure to the cloud. However, since the COVID-19 pandemic began, the speed of migration has skyrocketed, and the trend of working from anywhere is rendering traditional approaches to IT infrastructure virtually obsolete. One way to reduce expenses and streamline distributed IT operations is to adopt software-defined wide-area network (SD-WAN) technologies. Using an SD-WAN enables the construction of wide-area networks (WANs) on the principles of software-defined networking (SDN). SD-WAN solutions enable the routing of traffic through various parts of corporate networks efficiently while providing a single point for management and monitoring. 
They create virtual overlays using all sorts of existing networks (based on MPLS, Internet broadband, LTE, 5G, or similar), and they are less expensive and easier to deploy and manage than traditional MPLS-based WANs are. On the IT side, SD-WAN brings high performance, visibility, and corporate network agility. It also reduces maintenance costs. Our approach to distributed security Protecting distributed infrastructure requires first delivering security as a cloud computing service to the source of connection, be it a remote office or an employee working from home, and second, ensuring complete visibility of network and endpoint events. You can achieve that by adopting a SASE principle reinforced with extended detection and response (XDR). To meet market demands, network security companies are building endpoint capabilities and forging alliances with traditional security vendors. But existing security providers are unable to deliver holistic approaches to securing systems against advanced threats. Integrating external network controls into XDR developed by endpoint security vendors does not provide enough visibility into or investigation capabilities for incidents happening inside enterprise environments. That is why we chose another approach: Integrating the Brain4Net team will enable us to create a unified platform seamlessly integrating endpoint protection capabilities with network controls. Working together, the XDR platform with SASE will allow enterprises to implement a zero-trust strategy. Advantages of our own SD-WAN The move brings us into new territory and sharpens our goal of becoming a single-point provider of enterprise security both for endpoints and for networks. Kaspersky’s SASE offering means not only securing our customers, but also becoming their connectivity service provider. The step into network security enables us to upgrade our XDR proposition as well. 
We already provide our customers with best-in-class security software; adding a network security layer gives us an additional competitive edge. Filtering traffic, monitoring endpoint security incidents, and correlating network and node activity further improve our ability to detect and respond to complex threats. We are moving our core enterprise security business in this important direction to provide our customers with stable and cost-efficient networking services while continuing to protect them from the most advanced and stealthy threats. Further plans Kaspersky is becoming more active in mergers and acquisitions, with an eye toward acquiring strong teams that can bring synergies to our core business. As an example, Brain4Net is a successful company with technologies, solutions, and paying customers, with a team and products we are very optimistic about integrating. And this is far from the final stop in our M&A journey; other deals in the pipeline stand to improve our value proposition even more. View the full article
  14. Unknown attackers have compromised several versions of a popular JavaScript library, UAParser.js, by injecting malicious code. According to statistics on the developers’ page, many projects use the library, which is downloaded 6 to 8 million times every week. Thus, this supply-chain attack is one of the largest ever known. The malefactors compromised three versions of the library: 0.7.29, 0.8.0, and 1.0.0. All users and administrators should update the libraries to versions 0.7.30, 0.8.1, and 1.0.1, respectively, as soon as possible. What UAParser.js is, and why it is so popular JavaScript developers use the UAParser.js library for parsing the User-Agent data browsers send. It is implemented on many websites and used in the software development process of various companies, including Facebook, Apple, Amazon, Microsoft, Slack, IBM, HPE, Dell, Oracle, Mozilla, and more. Moreover, some software developers use third-party instruments, such as the Karma framework for code testing, which also depend on this library, further increasing the scale of the attack by adding an additional link to the supply chain. Introduction of malicious code Attackers embedded malicious scripts into the library to download malicious code and execute it on victims’ computers, on both Linux and Windows. One module’s purpose was to mine cryptocurrency. A second was capable of stealing confidential information such as browser cookies, passwords, and operating system credentials. However, that may not be all: According to the US Cybersecurity and Infrastructure Security Agency’s (CISA’s) warning, installing compromised libraries could allow attackers to take control of infected systems. According to GitHub users, the malware creates binary files: jsextension (in Linux) and jsextension.exe (in Windows). The presence of these files is a clear indicator of system compromise. 
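Both indicators above — the pinned versions and the dropped binaries — can be checked for mechanically. The sketch below (in Python, for portability) scans an npm lockfile for the compromised releases and searches a directory tree for the dropped files. The lockfile handling is a simplified assumption covering the common v1 ("dependencies") and v2/v3 ("packages") layouts, not an exhaustive parser:

```python
import json
from pathlib import Path

COMPROMISED = {"0.7.29", "0.8.0", "1.0.0"}        # affected ua-parser-js releases
IOC_FILES = {"jsextension", "jsextension.exe"}    # binaries the malware creates

def lockfile_hits(lockfile: Path) -> list:
    """Return compromised ua-parser-js versions pinned in a package-lock.json."""
    data = json.loads(lockfile.read_text())
    hits = []
    # Lockfile v2/v3 lists entries under "packages" (keyed by path),
    # v1 under "dependencies" (keyed by name); nested v1 deps are not walked here.
    for section in ("packages", "dependencies"):
        for name, meta in data.get(section, {}).items():
            if name.endswith("ua-parser-js") and meta.get("version") in COMPROMISED:
                hits.append(meta["version"])
    return hits

def ioc_files(root: Path) -> list:
    """Find dropped jsextension binaries anywhere under root."""
    return [p for p in root.rglob("*") if p.name in IOC_FILES]
```

A hit from either check means the machine should be treated as fully compromised — updated, scanned, and stripped of every credential that was used on it.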
How malicious code got into the UAParser.js library Faisal Salman, the developer of the UAParser.js project, stated that an unidentified attacker got access to his account in the npm repository and published three malicious versions of the UAParser.js library. The developer immediately added a warning to the compromised packages and contacted npm support, which quickly removed the dangerous versions. However, while the packages were online, a significant number of machines could have downloaded them. Apparently, they were online for a little more than four hours, from 14:15 to 18:23 CET on October 22. In the evening, the developer noticed unusual spam activity in his inbox — he said it alerted him to suspicious activity — and discovered the root cause of the problem. What to do if you downloaded infected libraries If you already have one of the bad versions, immediately update your libraries to the patched versions — 0.7.30, 0.8.1, and 1.0.1. However, that is not enough: According to the advisory, any computer on which an infected version of the library was installed or executed should be considered completely compromised. Therefore, users and administrators should change all credentials that were used on those computers. In general, development or build environments are convenient targets for attackers trying to organize supply-chain attacks. That means such environments urgently require antimalware protection. View the full article
  15. The signs of phishing can be obvious — a mismatch between the sender’s address and that of their supposed company, logical inconsistencies, notifications that appear to come from online services — but spotting a fake isn’t always so easy. One way to make a fake look more convincing is to tamper with the visible field containing the e-mail address. The technique is fairly uncommon in cases of mass phishing, but we see it quite a bit more in targeted messaging. If a message looks real, but you doubt the sender’s authenticity, try digging a little deeper and checking the Received header. This post describes how. Reasons to doubt Any strange request is a clear red flag. For example, an e-mail that asks you to do something outside your work role or perform any nonstandard action warrants a closer look, especially if it claims to be important (personal demand from the CEO!) or urgent (must be paid within two hours!). Those are standard phishing tricks. You should also be wary if you are asked to: Follow a link in the e-mail to an external website that requests your credentials or payment information; Download and open a file (particularly an executable file); Carry out actions related to monetary transfers or access to systems or services. How to find e-mail headers Unfortunately, the visible From field is easy to spoof. The Received header, however, should show the sender’s real domain. You can find this header in any mail client. Here, we’re using Microsoft Outlook as an example because of its widespread use in modern business. The process should not be radically different in another client, however; if you use one you can consult the help documentation or try to find the headers yourself. In Microsoft Outlook: Open the message you want to check; On the File tab, select Properties; In the Properties window that opens, find the Received field in the Internet headers section. 
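Outside Outlook, the same headers are easy to pull out programmatically — handy when you triage many messages. A minimal sketch using Python's standard email library, assuming the suspicious message has been saved as an .eml file (the file name here is made up):

```python
from email import policy
from email.parser import BytesParser

def received_chain(eml_path: str) -> list:
    """Return all Received headers in order of appearance (top first).
    The last element is the hop closest to the original sender."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    return [str(h) for h in msg.get_all("Received", [])]

# The bottom-most Received header is the one worth inspecting:
# chain = received_chain("suspicious.eml")
# if chain:
#     print(chain[-1])   # names the server that actually handed the message over
# else:
#     print("no Received headers at all — suspicious in itself")
```

Note that each relay prepends its own Received header, which is why the chain reads top-down from your server back toward the sender.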
Before reaching the recipient, an e-mail can pass through more than one intermediate node, so you may see several Received fields. You’re looking for the lowest one, which contains information about the original sender. It should look something like this: Received header How to check the domain from the Received header The easiest way to check the domain in the Received header is with our Threat Intelligence Portal. Some of its features are free, meaning you can use them without registering. To check the address, copy it, go to Kaspersky Threat Intelligence Portal, paste it into the search box on the Lookup tab, and click Look up. The portal will return all available information about the domain, its reputation, and WHOIS details. The output should look something like this: Information from Kaspersky Threat Intelligence Portal The very first line will probably display a “Good” verdict or “Uncategorized” sign. That just means our systems haven’t previously seen this domain used for criminal purposes. When preparing a targeted attack, attackers can register a fresh domain or use a breached legitimate domain with a good reputation. Carefully check the organization to which the domain is registered to see if it matches the one that the sender supposedly represents. An employee of a partner company in Switzerland, for example, is unlikely to send an e-mail through an unknown domain registered in Malaysia. Incidentally, it’s a good idea to use our portal to check links in the e-mail as well, if they seem dubious, and use the File Analysis tab to check any message attachments. Kaspersky Threat Intelligence Portal has lots of other useful features, but most are available only to registered users. For more information about the service, see the About the Portal tab. Protection against phishing and malicious e-mails Although checking suspicious e-mails is a good idea, keeping phishing e-mails from even reaching end users is better. 
Therefore, we always recommend installing antiphishing solutions at the corporate mail server level. Additionally, a solution with antiphishing protection running on workstations will block redirects through phishing links, in case the e-mail creators fool the recipient. View the full article
  16. System apps — installed on your smartphone by default and usually nonremovable — tend to stay out of the limelight. But whereas with other apps and services users have at least some choice, in this case tracking and surveillance capabilities are stitched into devices’ very fabric. Those are among the conclusions of a recent joint study by researchers at the University of Edinburgh, UK, and Trinity College Dublin, Ireland. They looked at smartphones from four well-known vendors to find out how much information they transmit. As a reference point, they compared the results with open-source operating systems based on Android, LineageOS and /e/OS. Here’s what they found. Research method For the purity of the experiment, the researchers set a fairly strict operating scenario for the four smartphones, one users are unlikely ever to encounter in real life: They assumed each smartphone would be used for calls and texts only; the researchers did not add any apps; only those installed by the manufacturer remained on the devices. What’s more, the imaginary user responded in the negative to all of the “Do you want to improve the service by forwarding data”–type questions that users typically have to answer the first time they turn on the device. They did not activate any optional services from the manufacturer, such as cloud storage or Find My Device. In other words, they kept the smartphones as private and in as pristine a state as possible throughout the study. The basic “spy-tracking” technology is the same in all such research. The smartphone connects to a Raspberry Pi minicomputer, which acts as a Wi-Fi access point. Software installed on the Raspberry Pi intercepts and decrypts the data stream from the phone. The data is then re-encrypted and delivered to the recipient — the developer of the phone, app, or operating system. In essence, the authors of the paper performed a (benevolent) man-in-the-middle attack.
The scheme used in the study to intercept smartphone-transmitted data. Source The good news is that all transmitted data was encrypted. The industry finally seems to have overcome its plague of devices, programs, and servers communicating in clear text, without any protection. In fact, the researchers spent a lot of time and effort deciphering and analyzing the data to figure out what exactly was being sent. After that, the researchers had relatively smooth sailing. They completely erased the data on each device and performed initial setup. Then, without logging in to a Google account, they left each smartphone on for a few days and monitored the transfer of data from it. Next, they logged in using a Google account, temporarily enabled geolocation, and went into the phone’s settings. At each stage, they monitored what data was sent and where. They tested a total of six smartphones: four with the manufacturer’s firmware and two with the LineageOS and /e/OS open-source versions of Android. Who collects the data? To absolutely no one’s surprise, the researchers found that smartphone makers were the primary collectors. All four devices running the original firmware (and a set of preinstalled programs) forwarded telemetry data, along with persistent identifiers such as the device serial number, to the manufacturer. Here, the authors of the paper distinguish standard firmware from custom builds. For example, LineageOS has an option to send data to developers (for monitoring programs’ operational stability, for example), but disabling the option stops data transmission. On factory-standard devices, blocking the sending of data during initial setup may indeed reduce the amount of data sent, but it does not rule out data transmission entirely. Next up for receiving data are the developers of preinstalled apps.
Here, too, we find an interesting nuance: According to Google’s rules, apps installed from Google Play must use a certain identifier to track user activity — Google’s Advertising ID. If you want, you can change this identifier in the phone’s settings. However, the requirement does not apply to apps the manufacturer preinstalls — which use persistent identifiers to collect a lot of data. For example, a preinstalled social network app sends data about the phone’s owner to its own servers, even if that owner has never opened it. A more interesting example: The system keyboard on one smartphone sent data about which apps were running on the phone. Several devices also came with operator apps that collected user-related information. Finally, Google system apps warrant a separate mention. The vast majority of phones arrive with Google Play Services and the Google Play Store, and usually YouTube, Gmail, Maps, and a few others already installed. The researchers note that Google apps and services collect far more data than any other preinstalled program. The graph below shows the ratio of data sent to Google (left) and to all other telemetry recipients (right): Amount of data transferred in kilobytes per hour to different recipients of user information. On average, Google (left) receives dozens of times more data than all other services combined. Source What data gets sent? In this section, the researchers again focus on identifiers. All data has some kind of unique code to identify the sender. Sometimes it is a one-time code, which is the privacy-respecting way to collect the statistics developers find useful — for example, on the operational stability of the system. But long-term and even persistent identifiers, which violate user privacy, get collected as well.
For example, owners can manually change the abovementioned Google Advertising ID, but very few do so, so we can consider the identifier, which is sent to both Google and the device manufacturers, near persistent. The device serial number, the radio module’s IMEI code, and the SIM card number are persistent identifiers. With the device serial number and the IMEI code, it is possible to identify the user even after a phone number change and complete device reset. The regular transfer of information about device model, display size, and radio module firmware version is less risky in terms of privacy; that data is the same for a large number of owners of the same phone model. But user activity data in certain apps can reveal a lot about owners. Here, the researchers talk about the thin line between data required for app debugging and information that can be used to create a detailed user profile, such as for targeted ads. For example, knowing that an app is eating up battery life can be important for the developer and will ultimately benefit the user. Data on which versions of system programs are installed can determine when to download an update, which is also useful. But whether harvesting information about the exact start and end times of phone calls is worthwhile, or indeed ethical, remains in question. Another type of user data that’s frequently reported is the list of installed apps. That list can say a lot about the user, including, for example, political and religious preferences. Combining user data from different sources Despite their thorough work, the researchers were unable to obtain a complete picture of how various phone and software vendors collect and process user data. They had to make some assumptions. Assumption one: Smartphone manufacturers that collect persistent identifiers can track user activity, even if said user erases all data from the phone and replaces the SIM card. 
Assumption two: All market participants have the ability to exchange data and, by combining persistent and temporary IDs, plus different types of telemetry, create the fullest possible picture of users’ habits and preferences. How this actually happens — and whether developers actually exchange data, or sell it to third-party aggregators — is beyond the scope of the study. The researchers speculate on the possibility of combining data sets to create a full profile of the smartphone owner (gaid stands for Google Advertising ID). Source Takeaways The nominal winner in terms of privacy turned out to be the phone with the Android variant /e/OS, which uses its own analog of Google Play Services and didn’t transmit any data at all. The other phone with open-source firmware (LineageOS) sent information not to the developers, but to Google, because the latter’s services were installed on that phone. These services are needed for the device to operate properly — some apps and many features simply do not work, or work poorly, without Google Play Services. As for the proprietary firmware of popular manufacturers, there is little to separate them. They all collect a fairly large set of data, citing user care as the reason. They essentially ignore users’ opt-out from collecting and sending “usage data,” the authors note. Only more regulations to ensure greater consumer privacy can change that situation, and for now, only advanced users who can install a nonstandard OS (with restrictions on the use of popular software) can eliminate telemetry completely. As for security, the collection of telemetry data does not appear to pose any direct risks. The situation is radically different from third-tier smartphones, on which malware can be installed directly at the factory. The good news from the study is that data transmission is fairly secure, which at least makes it hard for outsiders to gain access. 
The researchers did specify one important caveat: They tested European smartphone models with localized software. Elsewhere, depending on laws and privacy regulations, situations may differ. View the full article
  17. To open the 224th episode of the Kaspersky Transatlantic Cable podcast, Ahmed, Dave, and I discuss the targeting of researchers by some state-backed hackers. We first mentioned this story a few months back, but this week we’re rekindling the debate on researchers being targeted after Twitter banned some phishing accounts. From there, we head into our first quiz — spoiler alert, Dave and I fall victim to Ahmed’s trickery. We then welcome Maria Namestnikova, head of GReAT Russia, to discuss how parents can educate their kids on using social media securely. From there, we move on to some REvil weirdness. The gang has seen the keys for its Tor sites stolen and some signs of instability. It’s since gone offline — again! For our third story, we stay with ransomware, for which US financial institutions report having paid about $600 million in the first six months of 2021. Then, it’s on to another quiz. We just can’t get enough. The next item on the docket is a teaser to a podcast coming this weekend with Allison Pytlak of the Women’s International League for Peace and Freedom (WILPF) to discuss the need for more gender diversity in infosec. To close out our podcast, we discuss a Wales school system that is enabling facial recognition for kids buying lunch. If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below: Twitter suspends accounts used to snare security researchers Tips for parents on keeping kids safe online REvil ransomware shuts down again after Tor sites were hijacked US financial institutions report major increase in ransomware payments to cybercriminals Facial recognition used to take payments from schoolchildren View the full article
  18. Social networks becoming a burden? When out-of-control social media taxes your nerves, steals your focus and distracts you from important tasks, it’s time to do a digital detox. Today we will tell you how to get it done in a few easy steps. Step 1. Thin out your feed Unfollow anyone who doesn’t contribute to your experience — a former classmate newly obsessed with Sanskrit, an old hobby group that’s basically just ads now, whatever else you simply don’t want to deal with. If you’re not getting any benefit from the content, you have no need to invite it onto your feed. If an account is one you’d rather not unfollow or unsubscribe from, try muting it instead. Social networks let you hide updates from accounts without unsubscribing. Your friends won’t even know you’ve muted them. Step 2. Centralize communications Social networks are much more than just feeds; they’re also places to stay in touch with friends, relatives, and colleagues. But if you’re talking with people on half a dozen platforms, you may be wasting lots of time checking inboxes — even if no one has written to you. Try deciding with your friends where they should contact you, and centralize your correspondence on one or two platforms. That way you’ll be able to check the others much less often with no fear of missing an important message. Step 3. Clear up your screen Have you ever picked up your phone to check the weather, and then seen the Facebook icon, opened it just for a second, and ended up wasting two hours down a rabbit hole? To keep that from happening, try moving your social media icons out of sight. For example, hide them in a folder or send them back to your third or fourth page of apps — out of sight, out of mind. Step 4. 
Curate notifications No matter how responsible you may be about avoiding your feed and even keeping certain icons out of sight, if a social network sends a notification about a new post, you can easily, unthinkingly press that sneaky little window and find yourself right back in the thick of things. For help concentrating on what’s important, try disabling unnecessary notifications. To learn how, check out our instructions for iOS, macOS, Windows 10, and Android. Step 5. Configure Screen Time or Digital Wellbeing Seeing exactly how many hours a day you waste roaming social networks and messaging apps can be sobering. Apps to help with self-control are easy to find, but you don’t need to download anything for a view into your digital habits: Open your smartphone’s settings and enable Screen Time (in iOS) or Digital Wellbeing (in Android). Put the widget with the statistics in a place where you’ll always see it. And if seeing statistics isn’t enough, configure the app to let you open the social network only at certain times or for a limited amount of time. Step 6. Take a break Whenever you start something new, the most important — and hardest — thing is to establish new habits. Try spending a couple of weeks avoiding the apps that consume most of your time — when you reach for one out of habit, you can try imagining you’ve gone on a hike and don’t have an Internet connection. Better yet, actually get away from the Internet if you can. Cut off the flow of information so you can reset and no longer feel like you’re missing out. Step 7. Delete the app or your profile This step is optional; the suggestions above may have helped you attain the digital freedom you’re looking for, but if not, consider the drastic measure of removing the app from your phone or even deleting your account altogether. Don’t worry — you don’t have to lose your posts, messages, or photos. Almost every social network lets you keep all of your data even if you deactivate your profile. 
We’ve posted instructions on how to do this for Facebook, Instagram, Snapchat, and Twitter. Step 8. Keep an eye on yourself Having freed yourself from today’s social media overload, take a sec to congratulate yourself — but keep an eye on yourself as well. It’s entirely possible your brain will try to return to old habits. If in a couple of months you find yourself online at 3 a.m. debating the pressing issues in some stranger’s post comments, just go back and repeat these simple steps. View the full article
  19. Exactly five years ago, in October 2016, our solutions first encountered a Trojan named Trickbot (aka TrickLoader or Trickster). Found mostly on home computers back then, its primary task was to steal login credentials for online banking services. In recent years, however, its creators have actively transformed the banking Trojan into a multifunctional modular tool. What’s more, Trickbot is now popular with cybercriminal groups as a delivery vehicle for injecting third-party malware into corporate infrastructure. News outlets recently reported that Trickbot’s authors have hooked up with various new partners to use the malware to infect corporate infrastructure with all kinds of additional threats, such as the Conti ransomware. Such repurposing could pose an additional danger to employees of corporate security operation centers and other cybersec experts. Some security solutions still recognize Trickbot as a banking Trojan, as per its original specialty. Therefore, infosec officers who detect it might view it as a random home-user threat that accidentally slipped into the corporate network. In fact, its presence there could indicate something far more serious — a ransomware injection attempt or even part of a targeted cyberespionage operation. Our experts were able to download modules of the Trojan from one of its C&C servers and analyze them thoroughly. What Trickbot can do now The modern Trickbot’s main objective is to penetrate and spread on local networks. Its operators can then use it for various tasks — from reselling access to the corporate infrastructure to third-party attackers, to stealing sensitive data. 
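One practical response for defenders is to sweep file shares for known samples by comparing hashes against published indicators of compromise. Below is a minimal illustrative sketch, not a Kaspersky tool; the hash shown is merely the SHA-256 of empty input, a placeholder to swap for real indicators from your threat-intelligence feed:

```python
import hashlib
from pathlib import Path

# Placeholder indicator set. This value is just the SHA-256 of empty
# input, NOT a real Trickbot indicator; substitute IoCs from your feed.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_matches(root: Path) -> list[Path]:
    """Return every file under root whose SHA-256 is in the indicator set."""
    return [p for p in sorted(root.rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```

A hit from such a sweep is a starting point for incident response, not proof by itself; hash matching only catches known samples, which is why behavioral detection matters too.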
Here’s what the malware can now do: Harvest usernames, password hashes and other information useful for lateral movement in the network from Active Directory and the registry; Intercept web traffic on the infected computer; Provide remote device control via the VNC protocol; Steal cookies from browsers; Extract login credentials from the registry, the databases of various applications and configuration files, as well as steal private keys, SSL certificates and data files for cryptocurrency wallets; Intercept autofill data from browsers and information that users input into forms on websites; Scan files on FTP and SFTP servers; Embed malicious scripts in web pages; Redirect browser traffic through a local proxy; Hijack APIs responsible for certificate chain verification so as to spoof the verification results; Collect Outlook profile credentials, intercept e-mails in Outlook and send spam through it; Search for the OWA service and brute-force it; Gain low-level access to hardware; Provide access to the computer at the hardware level; Scan domains for vulnerabilities; Find addresses of SQL servers and execute search queries on them; Spread through the EternalRomance and EternalBlue exploits; Create VPN connections. A detailed description of the modules and indicators of compromise can be found in our Securelist post. How to guard against the Trickbot Trojan The statistics show that the majority of Trickbot detections this year were registered in the US, Australia, China, Mexico and France. This does not mean, however, that other regions are safe, especially considering the readiness of its creators to collaborate with other cybercriminals. To prevent your company from falling victim to this Trojan, we recommend that you equip all Internet-facing devices with a high-quality security solution. In addition, it’s a good idea to use cyberthreat monitoring services to detect suspicious activity in the company’s infrastructure. View the full article
  20. Often, employees of security operation centers and information security departments turn to Kaspersky specialists for expert help. We analyzed the most common reasons for such requests and created a specialized service that helps customers to ask a question directly to an expert in the area they need. Why you might need expert help The threat of cyberattacks is growing all the time as cybercriminals find ever more ways to achieve their goals, discovering new hardware and software vulnerabilities in applications, servers, VPN gateways, and operating systems and immediately weaponizing them. Hundreds of thousands of new malware samples emerge every day, and a wide variety of organizations, including major corporations and even government agencies, fall prey to ransomware attacks. In addition, new sophisticated threat and APT campaigns are also unearthed regularly. In this setting, threat intelligence (TI) plays a vital role. Only with timely information about attackers’ tools and tactics is it possible to build an adequate protection system, and, in the event of an incident, to conduct an effective investigation, detect intruders in the network, send them packing, and determine the primary attack vector to prevent a repeat attack. Applying TI in a given organization requires having a qualified in-house specialist who can use TI provider data in practice. That expert thus becomes the most valuable asset in any threat investigation. That said, hiring, training and keeping cybersecurity analysts is expensive, and not every company can afford to maintain a team of experts. Frequently asked questions Several departments at Kaspersky help clients deal with cyberincidents. Briefly, they are the Global Research & Analysis Team (GReAT), the Global Emergency Response Team (GERT), and the Kaspersky Threat Research Team. In all, we have brought together more than 250 world-class analysts and experts. 
The teams regularly receive lots of client requests regarding cyberthreats. Having analyzed the recent requests, we identified the following categories. Analysis of malware or suspicious software A scenario we encounter pretty frequently involves the triggering of detection logic in endpoint security or threat hunting rules. The company’s security service or SOC investigates the alert, finds a malicious or suspicious object but lacks the resources to conduct a detailed study. The company then asks our experts to determine the functionality of the detected object, how dangerous it is, and how to make sure the incident is resolved after its removal. If our experts can quickly identify what the client sent (we have a gigantic knowledge base of typical attacker tools and more than a billion unique malware samples), they answer immediately. Otherwise, our analysts need to investigate, and in complex cases, that can take a while. Additional information about indicators of compromise Most companies use a variety of sources for indicators of compromise (IoCs). The value of IoCs lies largely in the availability of context — that is, additional information about the indicator and its significance. That context is not always available, however. So, having detected a certain IoC in, say, the SIEM system, SOC analysts might see the presence of a trigger and realize an incident is possible but lack the information to investigate further. In such cases, they can send a request to us to provide information about the detected IoC, and in many cases such IoCs turn out to be interesting. For example, we once received an IP address that was found in a company’s traffic feed (i.e., accessed from the corporate network). Among the things hosted at the address was a software management server called Cobalt Strike, a powerful remote administration tool (or, simply, a backdoor), that all sorts of cybercriminals use. 
Its detection almost certainly means the company is already under attack (real or training). Our experts provided additional information about the tool and recommended initiating incident response (IR) immediately to neutralize the threat and determine the root cause of the compromise. Request for data on tactics, techniques, and procedures IoCs are by no means all a company needs to stop an attack or investigate an incident. Once the cybercriminal group behind the attack has been determined, SOC analysts typically require data on the group’s tactics, techniques, and procedures (TTPs); they need detailed descriptions of the group’s modus operandi to help determine where and how the attackers could have penetrated the infrastructure, the information on methods attackers typically use to become entrenched in the network, as well as on how they exfiltrate data. We provide this information as part of our Threat Intelligence Reporting service. Cybercriminals’ methods, even within the same group, can be very diverse, and describing all possible details is not feasible, even in a highly detailed report. Therefore, TI clients who use our APT and crimeware threat reports sometimes request additional information from us about a particular aspect of an attack technique in a specific context of relevance to the client. We have been providing those sorts of answers, and many others, through special services or within the limited framework of technical support. However, observing a rise in the number of requests and understanding the value of our research units’ expertise and knowledge, we decided to launch a dedicated service called Kaspersky Ask the Analyst, offering quick access to our expert advice through a single point of entry. Kaspersky Ask the Analyst Our new service enables clients’ representatives (primarily SOC analysts and infosec employees) to get advice from Kaspersky experts, thereby slashing their investigation costs. 
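A small practical aside: when analysts paste indicators into a ticket, a chat, or a request like the ones described above, they usually "defang" them first so nobody clicks a live link by accident. A tiny, purely illustrative helper for the common hxxp and bracketed-dot conventions (not part of any Kaspersky service, and deliberately simplistic) might look like this:

```python
def defang(ioc: str) -> str:
    """Make a URL or domain unclickable: http -> hxxp, '.' -> '[.]'.
    A simple convention-based sketch, not a bulletproof sanitizer."""
    return ioc.replace("http", "hxxp").replace(".", "[.]")

def refang(ioc: str) -> str:
    """Restore a defanged indicator to its original, usable form."""
    return ioc.replace("hxxp", "http").replace("[.]", ".")

print(defang("http://198.51.100.7/panel"))  # hxxp://198[.]51[.]100[.]7/panel
```

Round-tripping through refang(defang(...)) returns the original string, which makes the pair safe to use in automated report pipelines.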
We understand the importance of timely threat information; therefore, we have an SLA in place for all types of requests. With Kaspersky Ask the Analyst, infosec specialists can: Receive additional data from Kaspersky Threat Intelligence reports, including extended IoC and analytics context from GReAT and the Kaspersky Threat Research Team. Depending on your precise situation, they will discuss any connections between the indicators detected at your company with the activity described in the reports; Get a detailed analysis of the behavior of the identified samples, determine their purpose, and get recommendations for mitigating the consequences of the attack. The Kaspersky Global Emergency Response Team’s incident response experts will help with the task; Obtain a description of a specific malware family (for example, a particular piece of ransomware) and advice on protecting against it, plus additional context for specific IoCs (hashes, URLs, IP addresses) to help prioritize alerts or incidents involving them. Kaspersky Threat Research experts provide this information; Receive a description of specific vulnerabilities and their severity levels, as well as information about how Kaspersky products guard against exploitation. Kaspersky Threat Research experts likewise provide this data; Request an individual investigation (search) of dark web data. This will provide valuable information about relevant threats, which in turn suggests effective measures for preventing or mitigating cyberattacks. Kaspersky Security Services experts carry out the investigation. You’ll find more information about these services on our website. View the full article
  21. A recent review of five entry-level mobile phones retailing for about $10–$20 examined their security in detail. Commonly referred to as “feature phones” or “granny phones” — and often procured for elderly relatives either unwilling or unable to get used to smartphones — such phones can also be “just in case” spares. Some people also believe they are safer than Android-powered smartphones. Well, the reviewer refuted that last bit. He discovered hidden functions in four out of the five phones: Two transmit data at first power up (leaking the new owner’s personal information), and the other two not only leak private data, but can also subscribe the user to paid content by secretly communicating over the Internet with a command server. Infected granny phones The study author offers information about the methods used to analyze these simple devices’ firmware, the technicalities of which may be interesting to those willing to repeat the same analysis. However, let’s get straight to the findings. Out of the five phones, two send the user’s data somewhere the first time they’re powered on. To whom the data goes — manufacturer, distributor, firmware developer, or somebody else — is not clear. Neither is it clear how the data may be used. It could be assumed that such data might be useful to monitor sales or control the distribution of batches of products in different countries. To be clear, it doesn’t sound very dangerous; and after all, every smartphone transmits some telemetry data. Remember, however, that all major smartphone manufacturers at least try to anonymize the data they collect, and its destination is usually more or less clear. In this case, however, nothing is known about who is collecting owners’ sensitive information without their consent. For example, one of the phones transmits not only its serial number, country of activation, firmware info, and language, but also the base station identifier, handy for establishing the user’s approximate location. 
Moreover, the server collecting the data has no protection whatsoever, so the information is basically up for grabs. One more subtlety: The transmission takes place over the Internet. To be clear, a feature phone user may not even be aware that the device can go online. So, apart from anything else, the covert actions may result in surprise mobile traffic charges. Another phone from the review group, apart from leaking user data, was programmed to steal money from its owner. According to firmware analysis, the phone contacted the command server over the Internet and executed its instructions, including sending hidden text messages to paid numbers. The next phone model had even more advanced malicious functionality. According to one actual phone user, a total stranger used the phone number to sign up for Telegram. How could that have happened? Signing up for almost any messaging app means providing a phone number to which a confirmation code is sent by SMS. It seems, however, the phone can intercept this message and forward the confirmation code to a C&C server, all the while concealing the activity from the owner. Whereas the previous examples involved little more than unforeseen expense, this scenario threatens real legal problems, for example should the account be used for any criminal activities. What should I do now that I know push-button phones are unsafe? The difference between modern low-end phones and their counterparts of 10 years ago is that now, even dirt-cheap circuitry can include Internet access. Even with an otherwise clean device, this may prove an unpleasant discovery: a phone chosen specifically for its inability to connect to the Internet goes online anyway. Earlier, the same researcher analyzed another push-button phone. Although he found no malicious functionality, the device had a menu of paid subscriptions for horoscopes and demo games, the full versions of which the user could unlock — and pay for — with a text. 
In other words, your elderly relative or child could press the wrong button on a phone purchased specifically for its lack of Internet and apps and end up paying for the mistake. What makes this “infected” mobiles story important is that it’s often the manufacturer or a dealer back in China adding the “extra features,” so local distributors may not even be aware of the problem. Another complicating factor is that push-button phones come in small batches in a multitude of different models, and it is hard to tell a normal phone from a compromised one, unless one can thoroughly investigate firmware. Clearly, not all distributors can afford adequate firmware control. It might be easier just to buy a smartphone. Of course, that depends on budget, and unfortunately, cheaper smartphones may have similar malware issues. But if you can afford one — even a very simple one — from a major manufacturer, it could prove a safer choice, especially if your reason for choosing a push-button device is that you’re looking for something simple, reliable, and free of hidden functions. You can mitigate Android risks with a reliable antivirus app; feature phones offer no such control. As for elderly relatives, if they’re used to answering calls by opening their flip phone, adapting to a touch screen may prove next to impossible, but upgrading is worth a try in our opinion. Plenty of older folks have switched to smartphones easily enough and can now happily experience the wide world of mobile computing. View the full article
22. During the latest Patch Tuesday, Microsoft closed a total of 71 vulnerabilities. The most dangerous of them is CVE-2021-40449, a use-after-free vulnerability in the Win32k driver that cybercriminals are already exploiting. In addition, Microsoft closed three serious vulnerabilities that were already known to the public. For now, Microsoft experts rate their probability of exploitation as "less likely." However, security researchers are actively discussing them, and proofs of concept are available on the Internet, so someone may well try to use one.

Microsoft Windows kernel vulnerability

CVE-2021-41335, the most dangerous of the three, rates 7.8 on the CVSS scale. Contained in the Microsoft Windows kernel, it allows privilege escalation for a potentially malicious process.

Bypassing Windows AppContainer

The second vulnerability, CVE-2021-41338, involves bypassing the restrictions of the Windows AppContainer environment, which protects applications and processes. If certain conditions are met, an unauthorized party can exploit it thanks to default Windows Filtering Platform rules, leading to privilege escalation. Members of Google Project Zero discovered the vulnerability in July and reported it to Microsoft, giving the company a 90-day deadline to fix it and ultimately publishing a proof of concept in the public domain. The vulnerability has a CVSS rating of 5.5.

Windows DNS Server vulnerability

CVE-2021-40469 applies only to Microsoft Windows machines running as DNS servers. However, all current server versions of the operating system, from Server 2008 up to the recently released Server 2022, are vulnerable. CVE-2021-40469 allows remote code execution on the server and rates 7.2 on the CVSS scale.
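The three publicly known vulnerabilities above differ in severity and scope. As a rough illustration of how such a list can be triaged, here is a minimal Python sketch; the CVE data is taken from the text, while the 7.0 severity threshold is an arbitrary example, not an official cutoff:

```python
# Triage sketch: sort the publicly known CVEs from this Patch Tuesday
# by CVSS score and keep only those at or above a chosen threshold.
vulns = [
    {"cve": "CVE-2021-41335", "component": "Windows kernel", "cvss": 7.8},
    {"cve": "CVE-2021-41338", "component": "Windows AppContainer / WFP", "cvss": 5.5},
    {"cve": "CVE-2021-40469", "component": "Windows DNS Server", "cvss": 7.2},
]

def prioritize(vulns, threshold=7.0):
    """Return vulnerabilities sorted by CVSS score, highest first,
    keeping only those at or above the threshold."""
    high = [v for v in vulns if v["cvss"] >= threshold]
    return sorted(high, key=lambda v: v["cvss"], reverse=True)

for v in prioritize(vulns):
    print(f'{v["cve"]}: {v["component"]} (CVSS {v["cvss"]})')
```

In a real patch-management pipeline the same filtering would be driven by a vulnerability feed rather than a hard-coded list.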
How to protect your company

The results of our Incident Response Analyst Report 2021, produced by our Incident Response colleagues, indicate that vulnerabilities remain a popular initial attack vector. Moreover, the vulnerabilities involved aren't necessarily the most recent — the main threat is not zero-day vulnerabilities but delays in installing updates in general. Therefore, we always recommend installing updates on all connected devices as soon as possible. Updating is especially important for critical applications such as operating systems, browsers, and security solutions.

To protect your company from attacks using as-yet-unknown vulnerabilities, use security solutions with proactive protection technologies that can detect zero-day exploits.

View the full article
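The advice above about installing updates promptly can be backed by a simple inventory check. The sketch below compares a collected list of installed hotfix IDs against a required list; the KB numbers are placeholders for illustration, not the real IDs for the CVEs discussed here:

```python
# Sketch: check a list of installed hotfix IDs against required patches.
# On Windows, installed hotfixes can be collected with a tool such as
# `wmic qfe get HotFixID`; here we work on an already-gathered list.
# KB9999999 is a deliberately fake placeholder ID.

def missing_patches(installed, required):
    """Return the required KB IDs that are absent from the installed list."""
    installed_set = {kb.upper().strip() for kb in installed}
    return sorted(kb for kb in required if kb.upper() not in installed_set)

installed_hotfixes = ["KB5006670", "KB5005539"]   # example inventory
required_hotfixes = ["KB5006670", "KB9999999"]    # hypothetical requirements

print(missing_patches(installed_hotfixes, required_hotfixes))
```

The comparison is case-insensitive and whitespace-tolerant because inventory output formats vary between collection tools.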
23. We kick off the Transatlantic Cable podcast this week with the recent Twitch data breach. Details are still scarce, but the topic is on the collective lips of the infosec community. From there, Jeff, Ahmed, and Dave move on to Facebook's decision to crack down on its marketplace sales of Amazonian rainforest plots. How that will work in practice remains to be seen. Moving on, we talk about Google's recent decision to send out authenticator keys to more than 10,000 people it identified as hacking risks. Our final story involves the FBI, submarine plans, and cryptocurrency.

If you liked what you heard, please consider subscribing and sharing with your friends. For more information on the stories we covered, see the links below:

- Twitch gets gutted: All source code leaked
- Facebook to act on illegal sale of Amazon rainforest
- Google gives security keys to 10,000 high-risk users
- US nuke sub plans leaked on SD card hidden in peanut butter sandwich, claims FBI

View the full article
24. Our Behavioral Detection Engine and Exploit Prevention technologies recently detected the exploitation of a vulnerability in the Win32k kernel driver, which led to an investigation of the entire cybercriminal operation behind it. We reported the vulnerability (CVE-2021-40449) to Microsoft, and the company patched it in the regular update released on October 12. Therefore, as usual after Patch Tuesday, we recommend updating Microsoft Windows as soon as possible.

What CVE-2021-40449 was used for

CVE-2021-40449 is a use-after-free vulnerability in the NtGdiResetDC function of the Win32k driver. A detailed technical description is available in our Securelist post; in short, the vulnerability can leak the addresses of kernel modules in the computer's memory. Cybercriminals then use the leak to elevate the privileges of another malicious process. Through this privilege escalation, attackers were able to download and launch MysterySnail, a remote access Trojan (RAT) that gives them access to the victim's system.

What MysterySnail does

The Trojan begins by gathering information about the infected system and sending it to the C&C server. Then, through MysterySnail, the attackers can issue various commands. For example, they can create, read, or delete a specific file; create or terminate a process; get a directory listing; or open a proxy channel and send data through it. MysterySnail's other features include the ability to view the list of connected drives, monitor the connection of external drives in the background, and more. The Trojan can also launch the cmd.exe interactive shell (by copying the cmd.exe file to a temporary folder under a different name).
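MysterySnail's trick of copying cmd.exe into a temporary folder under a different name suggests a simple content-based check: hash every file in a directory and flag files whose contents match a known binary but whose name differs. The sketch below illustrates the idea only; the directory and reference file are placeholders, not real indicators of compromise:

```python
# Sketch: find renamed copies of a known binary by SHA-256 comparison.
# Real IOC scanning would use a curated hash set and cover many paths;
# this is a minimal, single-directory illustration of the technique.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def find_renamed_copies(directory, reference_file):
    """Return names of files in `directory` whose contents match
    `reference_file` but whose file name differs."""
    ref_hash = sha256_of(reference_file)
    ref_name = Path(reference_file).name.lower()
    hits = []
    for p in Path(directory).iterdir():
        if p.is_file() and p.name.lower() != ref_name:
            if sha256_of(p) == ref_hash:
                hits.append(p.name)
    return sorted(hits)
```

Hash comparison catches byte-identical copies regardless of name or extension; it would not catch a repacked or modified binary, which is why behavioral detection remains necessary.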
Attacks through CVE-2021-40449

The exploit for this vulnerability covers a range of operating systems in the Microsoft Windows family: Vista, 7, 8, 8.1, Server 2008, Server 2008 R2, Server 2012, Server 2012 R2, Windows 10 (build 14393), Server 2016 (build 14393), Windows 10 (build 17763), and Server 2019 (build 17763). According to our experts, the exploit exists specifically to escalate privileges on server versions of the OS.

After detecting the threat, our experts established that the exploit and the MysterySnail malware it loads into the system have seen wide use in espionage operations against IT companies, diplomatic organizations, and companies working for the defense industry. Thanks to the Kaspersky Threat Attribution Engine, our experts found similarities in the code and functionality of MysterySnail and malware used by the IronHusky group. Moreover, a Chinese-language APT group used some of MysterySnail's C&C server addresses back in 2012.

For more information about the attack, including a detailed description of the exploit and indicators of compromise, see our Securelist post.

How to stay safe

Start by installing the latest patches from Microsoft, and guard against future zero-day vulnerabilities by installing robust security solutions that proactively detect and stop the exploitation of vulnerabilities on all computers with Internet access. The Behavioral Detection Engine and Exploit Prevention technologies in Kaspersky Endpoint Security for Business detected the exploitation of CVE-2021-40449.

View the full article
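The list of affected Windows versions above can be turned into a quick exposure check. This sketch assumes, as a simplification, that the legacy versions are identified by name and the Windows 10 / Server releases by the two build numbers the text lists (14393 and 17763); a production check would need the full build-to-release mapping:

```python
# Sketch: check whether a Windows version/build falls in the range
# the exploit for CVE-2021-40449 is reported to cover. The mapping
# below is a simplification taken from the affected-versions list.
AFFECTED_LEGACY = {
    "Vista", "7", "8", "8.1",
    "Server 2008", "Server 2008 R2", "Server 2012", "Server 2012 R2",
}
AFFECTED_BUILDS = {14393, 17763}  # Windows 10 / Server 2016 and 2019

def is_targeted(os_name, build=None):
    """Return True if the OS version (and build, where relevant)
    is among those the exploit covers."""
    if os_name in AFFECTED_LEGACY:
        return True
    return build in AFFECTED_BUILDS
```

For example, a Server 2019 host on build 17763 would be flagged, while a Windows 10 machine on a later build such as 19044 would not.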