
Today we are excited to release a new build of the Windows Server vNext Long-Term Servicing Channel (LTSC) release that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions. There are several features to look for in this build.

UDP performance improvements: UDP is becoming a very popular protocol, carrying more and more networking traffic. With the QUIC protocol built on top of UDP and the increasing popularity of RTP and custom UDP streaming and gaming protocols, it is time to bring the performance of UDP to a level on par with TCP. In Server vNext we include the game-changing UDP Segmentation Offload (USO). USO moves most of the work required to send UDP packets from the CPU to the NIC's specialized hardware. Complementing USO, Server vNext also includes UDP Receive Side Coalescing (UDP RSC), which coalesces packets and reduces CPU usage for UDP processing. To go along with these two new enhancements, we have made hundreds of improvements to the UDP data path, both transmit and receive.

TCP performance improvements: Server vNext uses TCP HyStart++ to reduce packet loss during connection startup (especially in high-speed networks) and SendTracker + RACK to reduce Retransmit TimeOuts (RTO). These features are enabled in the transport stack by default and provide a smoother network data flow with better performance at high speeds.

PktMon support in TCPIP: The cross-component network diagnostics tool for Windows now has TCPIP support, providing visibility into the networking stack. PktMon can be used for packet capture, packet drop detection, packet filtering, and counting for virtualization scenarios such as container networking and SDN.

Improved RSC in the vSwitch: You're also likely to see better RSC performance in the vSwitch. First released in Windows Server 2019, Receive Segment Coalescing (RSC) in the vSwitch enables packets to be coalesced and processed as one larger segment upon entry into the virtual switch. This greatly reduces the CPU cycles consumed processing each byte (cycles/byte). However, in its original form, once traffic exited the virtual switch it would be re-segmented for travel across the VMBus. In Windows Server vNext, segments remain coalesced across the entire data path until processed by the intended application.

New affinity and anti-affinity rules let you keep things together or apart. When moving a role, the affinity object checks whether it can be moved. It also looks at the other objects associated with it, including disks, and verifies those as well, so you can have storage affinity between virtual machines (or roles) and Cluster Shared Volumes if desired. You can add multiple roles to a rule (domain controllers, for example) and set an anti-affinity rule so that the DCs remain in different fault domains. You can then set an affinity rule between each DC and its specific CSV drive so they stay together. If you have SQL Server VMs that need to sit at each site with a specific DC, you can set a same-fault-domain affinity rule between each SQL Server VM and its respective DC. Because the rule is now a cluster object, if you were to try and move a SQL Server VM from one site to another, the cluster checks all cluster objects associated with it. It sees that the VM is paired with the DC in the same site, then sees that the DC has its own rule and verifies that as well. Because that DC cannot be in the same fault domain as the other DC, the move is disallowed (a conceptual sketch of this check follows at the end of this item).

BitLocker has been available for Failover Clustering for quite some time. The requirement was that all cluster nodes be in the same domain, because the BitLocker key is tied to the Cluster Name Object (CNO). However, for clusters at the edge, workgroup clusters, and multidomain clusters, Active Directory may not be present; with no Active Directory, there is no CNO, and these cluster scenarios had no data-at-rest security. Starting with this Windows Server Insider build, we have introduced our own BitLocker key, stored locally (encrypted) for the cluster to use. This additional key is only created when the clustered drives are BitLocker-protected after cluster creation. Complete details are posted on OUR FORUM.
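To make the affinity-rule check described above concrete, here is a minimal conceptual sketch in Python. This is not the Windows Failover Clustering implementation or its PowerShell API; the class and function names are hypothetical, and the model only illustrates how a rule on one role pulls in the rules of its partner objects.

```python
# Conceptual sketch only: NOT the Windows Failover Clustering implementation or API.
# All names here are hypothetical; the point is to show how checking one role's
# affinity rule pulls in the rules of its partner objects as well.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    fault_domain: str                                      # e.g. the site the role currently lives in
    affinity_with: set = field(default_factory=set)        # partners that must share a fault domain
    anti_affinity_with: set = field(default_factory=set)   # partners that must NOT share one

def can_move(role: Role, target_domain: str, roles: dict) -> tuple:
    """Check a proposed move of `role` into `target_domain` against all related rules."""
    # The role's own anti-affinity rules: it may not land next to a blocked partner.
    for blocked in role.anti_affinity_with:
        if roles[blocked].fault_domain == target_domain:
            return False, f"{role.name} cannot share a fault domain with {blocked}"
    # Affinity partners would effectively have to follow the role, so their
    # rules are verified too; this is what blocks the SQL Server VM example above.
    for partner in role.affinity_with:
        for blocked in roles[partner].anti_affinity_with:
            if roles[blocked].fault_domain == target_domain:
                return False, f"{partner} cannot share a fault domain with {blocked}"
    return True, "move allowed"

# Two sites: DCs kept apart, SQL1 kept with DC1.
roles = {
    "DC1":  Role("DC1",  "Site-A", anti_affinity_with={"DC2"}),
    "DC2":  Role("DC2",  "Site-B", anti_affinity_with={"DC1"}),
    "SQL1": Role("SQL1", "Site-A", affinity_with={"DC1"}),
}
print(can_move(roles["SQL1"], "Site-B", roles))   # (False, 'DC1 cannot share a fault domain with DC2')
```

Running the sketch reports that SQL1 cannot move to Site-B because its paired DC is barred from sharing a fault domain with the other DC, mirroring the disallowed move described above.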

Windows 10’s Start Menu and Action Center could be refreshed with UI tweaks, if a new code reference spotted in the preview builds is anything to go by. On August 21, Microsoft published Windows 10 Build 20197 to testers in the Dev Channel of the Windows Insider program. This preview build comes with a new Disk Manager and bug fixes, but it also includes references to ‘WinUI’ for Windows 10’s Start Menu and Action Center. According to a scan of Microsoft Program Database (PDB) files in Windows 10 Build 20197, Microsoft is currently testing these features internally:
WinUIOnDesktop
WinUIDesktopActionCenter
WinUIDesktopStartMenu
WinUI is Microsoft’s next-generation user interface platform for Windows 10, Windows 10X, and foldable devices like the Surface Duo. Microsoft has already confirmed that WinUI can be used to refresh Win32 apps and create new Win32 or UWP apps using the new UI principles. The Start Menu, Action Center and other modern elements are written in XAML and they use UI components from “Windows.UI.XAML”. In theory, these references suggest that Microsoft might allow the Start Menu and Action Center to use UI components from “WinUI” as opposed to the current “Windows.UI.XAML”. Read more on our Forum.

While both Apple and Google are in US and EU crosshairs, Apple is in a far more precarious position. Are iOS users ready for the pros and cons of opening Pandora's app box? This week, Apple reached a significant milestone in its nearly 45-year history: a valuation of over $2 trillion. It's the first American company to achieve that lofty status, surpassing the valuation of Saudi Aramco as a publicly traded firm. This comes only a year after reaching the $1 trillion mark, a milestone that its industry rivals Amazon, Microsoft, and Alphabet (Google) soon followed. But Apple's rise in valuation has placed the company under increased scrutiny and growing concerns about how it has been managing its developer ecosystem, notably its App Store. In May of last year, I discussed how the US Supreme Court paved the way for potential antitrust action by allowing a class action suit against the company alleging monopolistic practices on its App Store to proceed. Although the ruling was not a judgment against Apple and the case was remanded to the lower courts -- the Court did not classify the company as a monopoly, and did not move forward with any antitrust penalty -- the decision does set a potentially damaging precedent for the company. By allowing this lawsuit to move forward, the high court's ruling opened up the possibility that there could be, at some point, antitrust proceedings against the company. All signs indicate that antitrust litigation against the company is virtually inevitable -- especially if Cupertino continues to maintain a status quo of allowing only Apple-trusted applications in its App Store and not permitting third-party payment services to be used for in-app transactions. In the last year, legal complaints against the company have increased, as have antitrust monitoring efforts by US and European regulators. In 2019, Spotify issued a complaint to the European Union, alleging that because Apple's music services aren't subject to the same 30% App Store transactional fees as third-party music services, Apple competes unfairly. Although Spotify's service can be subscribed to outside the App Store via an out-of-band browser purchase (in the same way other companies, such as Amazon, have also engaged in content purchases that bypass the App Store), Spotify argues that the 30% fee forces the firm to operate in an unfair environment if it wants to offer subscriptions directly via the iOS app. This complaint has resulted in the EU proceeding with a formal investigation into Apple's App Store practices, though the EU has stated that the investigation may take years to complete. In the past, the EU has fined American firms billions of dollars, such as its prior actions against Microsoft regarding browser bundling within Windows, which resulted in the company needing to build a "browser choice" screen into its operating system, and its $5B fine against Google for anticompetitive behavior in tying its search engine to Android. All of these legal activities seem to have been pushed to the back burner given the current political climate and priorities of the Trump administration. The upcoming US elections and the COVID-19 pandemic have proven to be effective distractions. But recently, Apple has again come under scrutiny due to its interactions with Epic Games. The company made changes to its popular Fortnite game to allow for in-app transactions that do not go through Apple's App Store or Google's Play Store on their respective iOS and Android platforms.
These changes resulted in the immediate removal of Fortnite from both the App Store and the Play Store, as well as a notification by Apple to Epic that its official developer accounts would be canceled at the end of the month due to violation of its developer agreements. Epic has since launched antitrust lawsuits against both Apple and Google, arguing that both companies are engaged in multiple violations of the Sherman Antitrust Act due to monopolistic practices. While both Apple and Google are in US and EU crosshairs, it could be argued that Apple is in a much more precarious position: any antitrust activity could create more significant issues for iOS platform end-users than for Android users. Why? Android can already side-load applications, including third-party app stores. This capability exists in the event an end-user wants to install software that either doesn't conform to the Play Store's policies (such as adult content) or that simply isn't listed in the Play Store for whatever reason. Additionally, Android is fully open source as part of the Android Open Source Project (AOSP), so there is full transparency when it comes to APIs. Only apps that use Google Mobile Services -- which are fully documented by the company and licensed to device manufacturers (such as Samsung and Microsoft) -- are considered to be proprietary. Complete details are posted on OUR FORUM.

The Nvidia GeForce RTX 3090 is the next-generation halo card from Team Green, and it's going to be a monster. The Nvidia GeForce RTX 3090 is now confirmed as the next halo graphics card from Team Green, thanks to Micron's inadvertent posting of memory details (the PDF has since been removed). With that piece of knowledge, we've dissected the rest of what we expect to find in the RTX 3090. Nvidia has a countdown to the 21st anniversary of its first GPU, the GeForce 256, slated for September 1. The battle for the best graphics cards and the top of the GPU hierarchy is about to get heated. We've talked about Nvidia Ampere and the RTX 30-series as a whole elsewhere, so this discussion is focused purely on the GeForce RTX 3090. Let's dig into the details of what we know about the GeForce RTX 3090, including the expected GPU and memory specifications, release date, price, features, and more. First, the GeForce RTX 3090 branding is the first 90-series suffix we've seen since the GTX 690 back in 2012. That was a dual-GPU variant of the GTX 680, but based on the Micron documentation, the RTX 3090 will still be a single GPU. Spoiler: multi-GPU support in games is practically dead, or at least on life support. Why bring back the 90 branding? Simple: it opens the door for a new tier of performance and pricing. That's not good news for our wallets. We discussed Micron's inadvertent posting of details and more in a recent Tom's Hardware show, which you can view below. Let's dig into the details. The Micron posting gives us one extremely concrete set of data. Unless Nvidia changes something between now and the unveiling, the GeForce RTX 3090 will have 12GB of GDDR6X memory clocked somewhere between 19 and 21 Gbps per pin. Let's be clear: it's 21Gbps. Nvidia's GTX 1080 Ti was the first 11GB GPU, and it was a surprise. Nvidia had multiple references to build off: turning the dial to 11, 11GB of memory, 11Gbps clocks. The same applies to 21Gbps. This is the 21st anniversary of the GeForce 256, the "world's first GPU" according to Nvidia, which coined the GPU acronym for the occasion. There's also a 21-day countdown going on right now. Add that to the specs from Micron and 21Gbps is effectively confirmed. If I'm wrong, I'll eat my GPU hat. This is a big deal, as it's the first time a GPU will have over 1TBps of memory bandwidth while using something other than HBM2 memory. (AMD's Radeon VII has 1TBps as well, via 16GB of HBM2.) We don't have exact details on how much companies pay for HBM2 vs. GDDR6X, but there's a big premium with HBM2 — you need a silicon interposer, plus the memory itself costs more. To put this in perspective, the RTX 2080 Ti 'only' has 616GBps, so this is effectively a 64% boost in memory bandwidth. That leads into the rest of the GPU specs, but let's first point out that the RTX 2080 Ti has 27% more memory bandwidth than the GTX 1080 Ti. It also has 20% more theoretical computational performance (TFLOPS), and architectural updates mean it makes better use of those resources. In short, GPU TFLOPS often scales similarly to bandwidth. As we've already pointed out, the move to 21Gbps GDDR6X increases raw memory bandwidth by 64% relative to the RTX 2080 Ti. That means we also expect the RTX 3090 to deliver around 50-75% more computational performance. Do you know what would make for a nice target? 21 TFLOPS. Yeah, baby! How it gets there isn't critical, but there are a few options. We know from the Nvidia A100 that Ampere can reach massive sizes on TSMC's 7nm process.
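As a quick sanity check on those bandwidth figures, here is a small back-of-the-envelope calculation. It assumes a 384-bit memory bus, which the leak does not state directly but which is implied if 12GB of GDDR6X means twelve 32-bit chips.

```python
# Quick sanity check on the memory bandwidth figures above.
# Assumption (not stated in the article): 12GB of GDDR6X means twelve 32-bit chips,
# i.e. a 384-bit memory bus.
bus_width_bits = 384
data_rate_gbps = 21          # per-pin data rate discussed above, in Gbps
rtx_2080_ti_gbs = 616        # RTX 2080 Ti bandwidth in GB/s, from the article

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8    # Gbps per pin * pins / 8 bits per byte
print(f"Rumored RTX 3090 bandwidth: {bandwidth_gbs:.0f} GB/s")               # ~1008 GB/s, just over 1TBps
print(f"Gain over RTX 2080 Ti: {bandwidth_gbs / rtx_2080_ti_gbs - 1:.0%}")   # ~64%
```

That lands just over 1TBps and about 64% above the RTX 2080 Ti's 616GBps, matching the figures quoted above.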
It's an 826mm² die, which is relatively close to the maximum reticle size — you can't make a chip physically larger than the reticle. The GA100 at the heart of the A100 also supports FP64 (64-bit floating-point) computation, which is necessary for its target market of scientific research. GeForce cards don't need FP64 and typically have only 1/32 the FP64 performance relative to FP32, instead of the 1/2 rate found in the bigger GP100, GV100, and GA100 chips. Option one is that Nvidia strips out all the FP64 functionality, adds ray tracing (RT) cores in its place, and still ends up with a big chip that has up to 128 SMs. This is more or less what happened with the Pascal generation: GP100 used HBM2, GP102 used GDDR5/GDDR5X, but both had a maximum configuration of 3,840 FP32 CUDA cores. Some of these would end up disabled to improve yields via binning, but if Nvidia goes with 118 SMs and 7,552 CUDA cores, then clocks the chip at 1.4GHz (boost), it would have a theoretical performance of 21.1 TFLOPS. Oh, and it uses 50W more power. Learn more about this powerhouse GPU card by visiting OUR FORUM.
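For reference, here is how that speculative 21.1 TFLOPS figure works out. The 64-cores-per-SM count and the two-FLOPs-per-clock FMA factor are our assumptions for this unannounced part, not published Nvidia specs.

```python
# How the speculative 21.1 TFLOPS figure above works out.
# Assumptions: 64 FP32 CUDA cores per SM and 2 FLOPs per core per clock (one FMA).
sms = 118
cores_per_sm = 64
boost_clock_ghz = 1.4
flops_per_core_per_clock = 2

cuda_cores = sms * cores_per_sm                                           # 7,552
tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000   # GFLOPS -> TFLOPS
print(f"{cuda_cores} CUDA cores at {boost_clock_ghz}GHz -> {tflops:.1f} TFLOPS")   # ~21.1 TFLOPS
```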

Device Manager is an important tool on Windows 10 that allows you to view installed hardware and manage its drivers. Windows 10’s Device Manager allows users to install an updated driver by scanning Microsoft’s servers. Searching for an updated driver may work if the device or driver is old and outdated and a new update has been published to Microsoft’s legacy driver library. As we reported on Sunday, Windows 10 has removed the internet-based method of updating device drivers for those running the May 2020 Update with all patches installed. This change was made quietly last month, and Microsoft has now revealed the real reason behind this move. Starting with Windows 10 KB4566782 (Build 19041.450), Microsoft says it is restoring the optional updates option in the Settings app for more users. When optional updates are detected for your device by Windows Update, they will be displayed on a new page called ‘Optional updates’. Microsoft noted that this change means you no longer need to launch the classic Device Manager to get updated drivers from Microsoft. If you want to search for the most recent driver online, Microsoft recommends using Windows Update instead. Device Manager will also inform users that better drivers are available on Windows Update or at the manufacturer’s website, but it won’t let you download the drivers. When you’re experiencing issues with a particular device, installing optional drivers may help, according to Microsoft. As always, Windows Update will continue to check for driver updates and automatically keep your drivers updated. “We look forward to your feedback on this enhancement to the update experience, and to bringing you continued improvements that improve your experience with Windows 10 overall,” Microsoft noted. It’s also worth noting that the drivers on Windows Update or in Microsoft’s driver library are often outdated. The download page of the manufacturer’s site is where you should head if you want the latest drivers. Follow this and more on OUR FORUM.

A billion or more Android devices are vulnerable to hacks that can turn them into spying tools by exploiting more than 400 vulnerabilities in Qualcomm’s Snapdragon chip, researchers reported this week. The vulnerabilities can be exploited when a target downloads a video or other content that’s rendered by the chip. Targets can also be attacked by installing malicious apps that require no permissions at all. From there, attackers can monitor locations, listen to nearby audio in real time, and exfiltrate photos and videos. Exploits also make it possible to render the phone completely unresponsive. Infections can be hidden from the operating system in a way that makes disinfection difficult. Snapdragon is what’s known as a system on a chip, which provides a host of components such as a CPU and a graphics processor. One of its functions, known as digital signal processing, or DSP, tackles a variety of tasks, including charging abilities and video, audio, augmented reality, and other multimedia functions. Phone makers can also use DSPs to run dedicated apps that enable custom features. “While DSP chips provide a relatively economical solution that allows mobile phones to provide end-users with more functionality and enable innovative features—they do come with a cost,” researchers from security firm Check Point wrote in a brief report of the vulnerabilities they discovered. “These chips introduce new attack surfaces and weak points to these mobile devices. DSP chips are much more vulnerable to risks as they are being managed as ‘Black Boxes’ since it can be very complex for anyone other than their manufacturer to review their design, functionality or code.” Qualcomm has released a fix for the flaws, but so far it hasn’t been incorporated into the Android OS or any Android device that uses Snapdragon, Check Point said. When I asked when Google might add the Qualcomm patches, a company spokesman said to check with Qualcomm. The chipmaker didn’t respond to an email asking for comment. In a statement, Qualcomm officials said: “Regarding the Qualcomm Compute DSP vulnerability disclosed by Check Point, we worked diligently to validate the issue and make appropriate mitigations available to OEMs. We have no evidence it is currently being exploited. We encourage end-users to update their devices as patches become available and to only install applications from trusted locations such as the Google Play Store.” Check Point said that Snapdragon is included in about 40 percent of phones worldwide. With an estimated 3 billion Android devices, that amounts to more than a billion phones. In the US market, Snapdragons are embedded in around 90 percent of devices. More details are posted on OUR FORUM.