AIStory.News
Daily AI news — models, research, safety, tools, and infrastructure. Concise. Curated.

© 2025 Safi IT Consulting


PowerToys on-device AI adds Ollama local model support

Nov 20, 2025


Microsoft has updated PowerToys for Windows 11, adding on-device AI to the Advanced Paste utility through Ollama and Foundry Local. The change lets users run translations and summaries locally, reducing cloud dependency and removing the need for API credits.

The 0.96 release routes prompts to local engines instead of defaulting to online providers. Supported paths include Microsoft’s Foundry Local tool and the open-source Ollama runtime. As a result, Advanced Paste can perform common clipboard tasks entirely on your PC’s NPU.

PowerToys on-device AI update explained

The update shifts core operations to on-device inference for supported tasks, so text conversion and lightweight transformations can complete without sending data to the cloud. The Verge first reported the change, highlighting benefits like privacy, speed, and cost control. The tool also keeps your clipboard data on the machine during local runs.

Advanced Paste still supports online models for heavier lifts. You can point it to Azure OpenAI, Gemini, or Mistral when you need larger context windows or specific capabilities, and choose local routes for quick redactions or summaries. This flexibility helps teams balance latency, privacy, and quality.
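As a rough illustration of that local-versus-cloud trade-off (this is a hypothetical routing sketch, not PowerToys’ actual logic, and the threshold is an arbitrary assumption):

```python
# Hypothetical hybrid router: short clipboard tasks stay on-device,
# oversized prompts escalate to a cloud provider. The character
# threshold is an illustrative assumption, not a PowerToys setting.

def choose_route(text: str, max_local_chars: int = 4000) -> str:
    """Pick 'local' for small prompts and 'cloud' for large ones."""
    return "local" if len(text) <= max_local_chars else "cloud"

print(choose_route("Translate this sentence."))  # a quick task stays local
print(choose_route("x" * 10_000))                # a long context escalates
```

Real deployments would also weigh model capability and policy, not just prompt size, but the length check captures the basic pattern.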

Microsoft also tweaked the interface. A new indicator shows the current clipboard type before you run a prompt, so users can verify what they will paste and avoid format surprises.

How Ollama local models fit in

Ollama provides a simple way to run local large and small language models. With this update, Advanced Paste can send prompts to Ollama for processing on your device. In practice, that means you can translate, summarize, or reformat a copied block without invoking a cloud API. You control the model, system prompts, and memory footprint.
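Ollama serves a local REST API on port 11434 by default, which is the kind of endpoint a desktop tool can call. A minimal sketch of a summarize request against that API (the model name is an example; use whatever you have pulled locally):

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generate request for Ollama's REST API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def summarize(text: str, model: str = "llama3.2") -> str:
    # "llama3.2" is an example model; any locally pulled model works.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, f"Summarize in one sentence:\n\n{text}"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `summarize(...)` requires a running Ollama instance with the model pulled; nothing leaves the machine.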

Because Ollama supports a range of open models, developers gain choice. You might use a compact model for speed and battery life, or run a larger model when your hardware allows. This approach aligns with a broader move toward open, modular AI stacks on the desktop.

Readers can explore how Ollama manages local models and tooling on its site. The project documents model pulls, prompt templates, and hardware considerations, so newcomers can adopt local inference with minimal setup and iterate quickly.

Foundry Local and enterprise needs

Microsoft’s Foundry Local tool offers another path for on-device inference. While details vary by deployment, the value proposition mirrors Ollama’s privacy and control. Enterprises can comply with internal data policies while keeping sensitive text on managed endpoints, and local execution can reduce latency for repetitive clipboard tasks in high-volume workflows.

Security teams often favor predictable data flows. With on-device routes, administrators can audit model sources and configuration files, and gate online access to approved providers. The Advanced Paste integration may therefore ease AI adoption inside regulated environments.

Advanced Paste update highlights

  • Local processing via Foundry Local and Ollama reduces data egress.
  • Optional online models include Azure OpenAI, Gemini, and Mistral for complex tasks.
  • A UI change surfaces clipboard context before execution.
  • No API credits are required for supported local operations.

These changes extend PowerToys’ mission to streamline everyday tasks. Clipboard work is a frequent friction point across roles. Consequently, automating quick transformations with low overhead can save meaningful time at scale.

Why on-device NPU processing matters

Windows PCs increasingly ship with NPUs that accelerate AI workloads. When tools use the NPU, they can maintain performance with lower power draw. In turn, users see faster responses without constant network use. This balance is crucial for laptops that must run long hours on battery.

Privacy remains another driver. Many teams cannot send snippets off the device due to policy. On-device AI allows them to benefit from generative tools within those constraints. Moreover, hybrid routing lets them escalate to cloud models when a task exceeds local limits.

What this means for open tooling

Integrating Ollama reflects growing interest in open, portable AI runtimes. Developers want to mix and match models, tokenizers, and serving layers. Therefore, desktop tools that speak to open runners unlock broader experimentation. Teams can standardize on local pipelines and keep options open for future models.

This change may also encourage model right-sizing. Instead of sending every task to a large cloud model, users can pick a suitable local model first. Additionally, a fallback to online providers stays available for complex prompts. Over time, this pattern could reduce costs and improve reliability.

Limitations and what to watch

Local AI has practical limits tied to memory, compute, and thermals. Heavier prompts or long contexts may still require cloud models. Users should expect trade-offs in accuracy and speed, depending on model choice. Still, many clipboard tasks are short and structured, which suits local inference well.

Model governance matters, too. Teams should track which models they install and how they update them. Clear policies reduce drift and unexpected behavior. Meanwhile, Microsoft’s UI hints aim to keep users aware of context before they run a transformation.

How to try it

Power users can download the latest PowerToys build from the project’s GitHub repository. After updating, configure Advanced Paste to point at Ollama or Foundry Local. Then test routine prompts like summarize, translate, or convert to Markdown. If you need larger models, connect an approved cloud provider instead.
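The routine tasks mentioned above boil down to prompt templates wrapped around the clipboard contents. A hypothetical set for testing a local route (the wording is illustrative, not what Advanced Paste actually ships):

```python
# Hypothetical prompt templates for routine clipboard tasks; Advanced
# Paste's built-in prompts may differ.
TASKS = {
    "summarize": "Summarize the following text in two sentences:\n\n{clip}",
    "translate": "Translate the following text to English:\n\n{clip}",
    "markdown": "Convert the following text to Markdown:\n\n{clip}",
}

def render(task: str, clip: str) -> str:
    """Fill a task template with the copied text."""
    return TASKS[task].format(clip=clip)

print(render("summarize", "PowerToys 0.96 adds local model support."))
```

Short, structured prompts like these are exactly the workloads that fit comfortably on a local model.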

For developers, Ollama’s quick-start guides explain model pulls and prompt execution. Mistral, Azure OpenAI, and other providers document their APIs and best practices, so you can build a hybrid workflow that selects the optimal path for each job.

Conclusion: a practical step for local AI

Microsoft’s decision to route Advanced Paste through local engines advances practical, privacy-aware AI on Windows. The integration with Ollama and Foundry Local strengthens user control while keeping flexibility for cloud scale. As desktop NPUs spread, expect more utilities to follow this hybrid pattern and give users choice at every step.

For deeper details on the update and configuration options, see reporting by The Verge, explore the PowerToys code and releases on GitHub, review Ollama’s local model documentation, learn about Azure OpenAI’s setup, and check Mistral’s offerings for online model routing.

