Hacker News | mparis's comments

Is there any real information backing this?

> Wang was brought in 9 months back to lead Meta's Superintelligence Lab, but now it looks like Zuckerberg is building a parallel lab called "Reality Labs" with Bosworth

Reality Labs is their AR/VR play, no?


> Is there any real information backing this?

No.

https://news.ycombinator.com/item?id=47316505


I love the simplicity & practicality of Go, but can't get over the limited type-system.

I love the expressivity of Rust, but compile times are a problem.

Someone with some sway, please convince a hyperscaler to support something like https://borgo-lang.github.io/. I think it may be the AST that we all need.


Seems like a great feature but am I the only one that still regularly sees the totally broken scrolling bug in CC?

I have churned from CC in favor of Codex until the scrolling bug is fixed. No set of features will convince me to switch back until they fix the broken UI.

I haven’t dug into the JS codebase, but I imagine they will have a hard time matching the performance of the Rust-based Codex.


I haven't tried the demo but I love this idea!

Would be cool if I could somehow constrain a chord to a key, then enumerate the scale degrees I want, so I can make some real funky sounds that don't fit the standard chord qualities.
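For what it's worth, a toy Python sketch of the kind of thing I mean (the note names, major-scale assumption, and 1-based degree convention are all my own choices):

```python
# Toy sketch: constrain a chord to a key, then pick arbitrary scale
# degrees instead of a fixed chord quality.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def major_scale(root):
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in MAJOR_STEPS]

def chord_from_degrees(root, degrees):
    """Build a chord from 1-based scale degrees; degrees past 7
    (9, 11, 13, ...) wrap around as extensions."""
    scale = major_scale(root)
    return [scale[(d - 1) % 7] for d in degrees]

print(chord_from_degrees("C", [1, 3, 5]))      # plain C major triad
print(chord_from_degrees("C", [1, 2, 5, 13]))  # something funkier
```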


I've been playing with the Gemini CLI w/ the gemini-pro-3 preview. First impressions are that it's still not really ready for prime time within existing complex codebases. It does not follow instructions.

The pattern I keep seeing is that I ask it to iterate on a design document. It will, but then it immediately jumps into changing source files despite explicit asks to only update the plan. It may be a Gemini CLI problem more than a model problem.

Also, whoever at these labs is deciding to put ASCII boxes around their inputs needs to try using their own tool for a day.

People copy and paste text in terminals. Someone at Gemini clearly thought about this, as they have an annoying `ctrl-s` hotkey that you need to use for some reason. But they then also provide the stellar experience of copying "a line of text where you then get | random pipes | in the middle of your content".

Codex figured this out. Claude took a while but eventually figured it out. Google, you should also figure it out.

Despite model supremacy, the products still matter.


We've been running structured outputs via Claude on Bedrock in production for a year now and it works great. Give it a JSON schema, inject a '{', and sometimes do a bit of custom parsing on the response. GG

Nice to see them support it officially. OpenAI has officially supported this for a while, but, at least historically, I have been unable to use it because it adds deterministic validation that errors on certain standard JSON Schema elements that we used. The lack of "official" support is the feature that pushed us to use Claude in the first place.
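For the curious, the '{' injection plus "a bit of custom parsing" can be sketched roughly like this; the model call itself is omitted, and the trimming heuristic is my own assumption, not what any provider does under the hood:

```python
import json

# The assistant turn is prefilled with '{', so the model only generates
# the remainder of the object. Re-attach the brace, then trim any
# trailing chatter after the brace that closes the top-level object.
def parse_prefilled_json(continuation: str) -> dict:
    text = "{" + continuation
    # Scan for the closing brace of the top-level object, ignoring
    # braces that appear inside string literals.
    depth, end, in_string, escaped = 0, None, False, False
    for i, ch in enumerate(text):
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    return json.loads(text[:end])

# e.g. a continuation that rambles after the closing brace:
print(parse_prefilled_json('"status": "approved", "score": 0.9} Hope that helps!'))
# {'status': 'approved', 'score': 0.9}
```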

It's unclear to me that we will need "modes" for these features.

Another example: I used to think that I couldn't live without Claude Code "plan mode". Then I used Codex and asked it to write a markdown file with a todo list. A bit more typing but it works well and it's nice to be able to edit the plan directly in editor.

Agree or Disagree?


Before Claude Code shipped with plan mode, the workflow for using most coding agents was to have it create a `PLAN.md` and update/execute that plan. Planning mode was just a first-class version of what users were already doing.


Claude Code keeps coming out with a lot of really nice tools that others haven't started to emulate from what I've seen.

My favorite one is going through the plan interactively. It turns it into a multiple-choice/option TUI, and the last choice is always to reprompt that section of the plan.

I switched back to Codex recently, and not being able to do my planning solely in the CLI feels like the early 1900s.

To trigger the interactive mode, do something like:

Plan a fix for:

<Problem statement>

Please walk me through any options or questions you might have interactively.


> Give it a JSON schema, inject a '{', and sometimes do a bit of custom parsing on the response

I would hope that this is not what OpenAI/Anthropic do under the hood, because otherwise, what if one of the strings needs a lot of \escapes? Is it also supposed to never write actual newlines in strings? It's awkward.

The ideal solution would be to have some special tokens like [object_start] [object_end] and [string_start] [string_end].
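As a rough sketch of what I mean (the token names and the flat token-list interface are made up), detokenization could do the escaping so the model never has to:

```python
import json

# The model emits structural tokens plus raw string payloads; all JSON
# escaping happens at detokenization time, so the model never writes
# '\\' or '\"' itself. Token names here are hypothetical.
def detokenize(tokens):
    out = []
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t == "[object_start]":
            out.append("{")
        elif t == "[object_end]":
            out.append("}")
        elif t == "[string_start]":
            # Everything up to [string_end] is one raw string payload;
            # json.dumps handles quotes, backslashes, and newlines.
            j = tokens.index("[string_end]", i)
            out.append(json.dumps("".join(tokens[i + 1:j])))
            i = j
        elif t in (":", ","):
            out.append(t)
        i += 1
    return "".join(out)

raw = ["[object_start]", "[string_start]", "note", "[string_end]", ":",
       "[string_start]", 'line one\nwith "quotes"', "[string_end]",
       "[object_end]"]
print(detokenize(raw))  # valid JSON; the newline and quotes are escaped for us
```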


AI x Healthcare Startup | Boston, MA Onsite | Full-time | Early Engineer

We're looking for a backend-leaning fullstack dev. You will be one of the first engineers outside of the founding team. Here is a bit more about us:

We're a seed stage AI startup backed by several top tier VCs. We're on a mission to ensure patients get the coverage they deserve from their health insurance.

We’re building deep, vertically integrated technology systems to solve fundamental problems in US healthcare - the biggest market in the world ($5T). We use AWS, K8s, React, and Rust but there is no requirement to have prior experience with them specifically.

We are a founding team made up of ex-YC, Amazon, Meta, Microsoft, and Harvard Business School with previous successful exits. We have a 6+ month customer waitlist and growing insanely fast.

We're hiring our first engineer outside the founders. You'll work directly with customers to understand their needs, design the right solution, and build from zero to one. You'll own entire parts of the roadmap and tech stack while wearing multiple hats. Most important characteristics are resilience, work ethic, and curiosity. We care about slope, not where you are today.

This is an opportunity to work in an insanely fast paced, high ownership environment while solving real problems in healthcare. We're happy to share more details on the role in person/on zoom. Please fill out this form if interested!

https://wgwx7h7be0p.typeform.com/to/LV0t8OjI


Went through the form, seems like a data harvesting survey. Asks for several pieces of personal information, step by step, and then ends with saying they’ll be in contact.

No details at all about the position in that link


Very interested; filled out the form. I wrote this about finding the right people. It's a plug, but might actually be worth a look.

https://news.ycombinator.com/item?id=44415442


I'm a recent snafu (https://docs.rs/snafu/latest/snafu/) convert over thiserror (https://docs.rs/thiserror/latest/thiserror/). You pay the cost of adding `context` calls at error sites but it leads to great error propagation and enables multiple error variants that reference the same source error type which I always had issues with in `thiserror`.

No dogma. If you want an error per module that seems like a good way to start, but for complex cases where you want to break an error down more, we'll often have an error type per function/struct/trait.


Thanks for using SNAFU! Any feedback you'd like to share?


> multiple error variants that reference the same source error type which I always had issues with in `thiserror`.

Huh?

    #[derive(Debug, thiserror::Error)]
    enum CustomError {
        #[error("failed to open a: {0}")]
        A(std::io::Error),
        #[error("failed to open b: {0}")]
        B(std::io::Error),
    }
    
    fn main() -> Result<(), CustomError> {
        std::fs::read_to_string("a").map_err(CustomError::A)?;
        std::fs::read_to_string("b").map_err(CustomError::B)?;
        Ok(())
    }
If I understand correctly, the main feature of snafu is "merely" reducing the boilerplate when adding context:

    low_level_result.context(ErrorWithContextSnafu { context })?;
    // vs
    low_level_result.map_err(|err| ErrorWithContext { err, context })?;
But to me, the win seems too small to justify the added complexity.


You certainly can use thiserror to accomplish the same goals! However, your example does a little subtle sleight of hand that you probably didn't mean to and leaves off the enum name (or the `use` statement):

    low_level_result.context(ErrorWithContextSnafu { context })?;
    low_level_result.map_err(|err| CustomError::ErrorWithContext { err, context })?;
Other small details:

- You don't need to move the inner error yourself.

- You don't need to use a closure, which saves a few characters. This is even true in cases where you have a reference and want to store the owned value in the error:

    #[derive(Debug, Snafu)]
    struct DemoError { source: std::io::Error, filename: PathBuf }

    let filename: &Path = todo!();
    result.context(DemoSnafu { filename })?; // `context` will change `&Path` to `PathBuf`
- You can choose to capture certain values implicitly, such as a source file location, a backtrace, or your own custom data (the current time, a global-ish request ID, etc.)

----

As an aside:

    #[error("failed to open a: {0}")]
It is now discouraged to include the text of the inner error in the `Display` of the wrapping error. Including it leads to duplicated data when printing out chains of errors in a nicer / structured manner. SNAFU has a few types that work to undo this duplication, but it's better to avoid it in the first place.


Congrats on the launch. Seems like a natural domain for an AI tool. One nice aspect about pen testing is it only needs to work once to be useful. In other words, it can fail most of the time and no one but your CFO cares. Nice!

A few questions:

On your site it says, "MindFort can assess 1 or 100,000 page web apps seamlessly. It can also scale dynamically as your applications grow."

Can you provide more color as to what that really means? If I actually asked you to assess 100,000 pages, what would happen? Is it possible for my usage to block/brown-out another customer's usage?

I'm also curious what happens if the system does detect a vulnerability. Is there any chance the bot does something dangerous with, e.g., its newly discovered escalated privileges?

Thanks and good luck!


Thanks so much!

In regards to the scale, we absolutely can assess at that scale, but it would require quite a large enterprise contract upfront, as we would need to get the required capacity from our providers.

The system is designed to safely test exploitation, and not perform destructive testing. It will traverse as far as it can, but it won't break anything along the way.

