💰 U.S. Cuts Deep into Science and Medicine Grants
A crucial analysis of how policy shifts can reshape scientific ecosystems—highlighting the risks of politicization and the potential erosion of U.S. research leadership.
A New York Times investigation reveals that the Trump administration’s 2025 reforms to federal science funding drastically reduced competitive grants from the NIH and NSF. After the agencies switched to lump-sum funding for multi-year projects, the total number of grants shrank, with particularly severe cuts to diversity-focused and early-career programs. Both agencies saw major drops in new and renewed awards across nearly all scientific fields, while fellowships for young researchers fell by about one-third. Officials justified the move as an effort to streamline spending and prioritize ‘science-driven’ goals.
🔗 Read more 🔗
🎵 Programming Languages for Music
🔗 Read more 🔗
⚛️ Is Practical Quantum Computing Finally Near?
A thoughtful and balanced reflection on the state of quantum computing—combining scientific rigor with a realistic view of its current limitations and potential.
Scott Aaronson revisits the question of whether practical quantum computing is close at hand, following insights from the Q2B conference. He notes impressive progress from Google, Quantinuum, and QuEra, with qubit fidelity surpassing fault-tolerance thresholds. While confident in the robustness of quantum theory, he remains skeptical of overhyped claims and warns that analyses of cryptography-breaking potential may soon be restricted due to growing security risks.
🔗 Read more 🔗
🌿 Nature Programming Language
🔗 Read more 🔗
🤖 Structured Outputs and the Illusion of Confidence
A sharp, well-argued piece that reminds developers to balance precision with flexibility in LLM design. Particularly relevant for engineers using JSON-based APIs or schema-constrained decoding in production AI systems.
Boundary ML explains how enforcing structured outputs in large language models can harm response quality by prioritizing format correctness over reasoning and adaptability. Using GPT-5.2 examples, the article shows that constrained decoding leads to seemingly valid but shallow results that mask underlying errors.
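The trade-off described above can be illustrated with a minimal sketch (prompts, replies, and parser names here are hypothetical, not from the article): a strict JSON-only response format leaves the model no room for intermediate reasoning, while a "reason first, then extract" flow keeps the reasoning and still yields structured data.

```python
import json

# Hypothetical prompts illustrating the two styles discussed above.
STRICT_PROMPT = 'Reply ONLY with JSON matching {"answer": int}.'
FLEXIBLE_PROMPT = "Think step by step, then end with a line: ANSWER: <json>"

def parse_strict(response: str) -> dict:
    # Constrained decoding guarantees this parse succeeds,
    # but the model spent no tokens on reasoning.
    return json.loads(response)

def parse_flexible(response: str) -> dict:
    # Lenient extraction: keep the free-form reasoning,
    # pull out only the final JSON payload.
    payload = response.rsplit("ANSWER:", 1)[-1].strip()
    return json.loads(payload)

# Simulated model outputs under each prompt style:
strict_reply = '{"answer": 17}'
flexible_reply = '12 + 5 = 17, so the result is 17.\nANSWER: {"answer": 17}'

print(parse_strict(strict_reply))      # {'answer': 17}
print(parse_flexible(flexible_reply))  # {'answer': 17}
```

Both parsers return the same structured result, but only the flexible flow preserves the reasoning trace that, per the article, correlates with answer quality.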
🔗 Read more 🔗
