Aeolus98's comments | Hacker News

No matter what, you'll need to clear a certain volume floor. Masks are expensive [0], especially for nicer process nodes.

Once you have a mask set and fab time, it's off to the races. IMO $1M really isn't bad for a simple chip run.

[0] https://anysilicon.com/semiconductor-wafer-mask-costs/


I might be able to weigh in here.

Having these tools as open source and freely available is a huge deal for so many industries. I've worked with these tools at an academic level and now at a startup, and the magnitude of this enabling technology is amazing. The tooling investment alone is huge; making the core solvers and algorithms more accessible should spawn a whole new wave of startups and research into effectively employing them. Just recently, I heard of friends building theorem provers for EVM bytecode to formally check smart contracts and eliminate bugs like these [0].

These synthesis tools roughly break down like this:

1. Specify your "program"

- In EDA tools, your program is specified in Verilog/VHDL and is turned into a netlist: the actual wiring of the gates together.

- In 3D printers, your "program" is the CAD model, which can be represented as a series of piecewise triple integrals

- In some robots, your program is the set of goals you'd like to accomplish

At this stage, representation and user friendliness are king. CAD programs make intuitive sense and have the expressive power to describe almost anything. Industrial tools leverage this high-level representation for a variety of uses: in the CAD model of an airplane, checking whether maintenance techs can physically reach every screw; in EDA, providing enough information for chip simulation or high-level compilation (Chisel).

2. Restructure things until you get to an NP-complete problem, ideally in the form "minimize cost subject to some constraints". The result of this optimization can be used to construct a valid program in a lower-level language.

- In EDA, this problem looks like "minimize the silicon die area, layers, and power used, subject to the timing requirements of the original Verilog", where the low-level representation is the physical realization of the chip.

- In 3D printers, it's something like "minimize time spent printing subject to the part being printable with the desired infill". Support generation and other things can be rolled into this to make the print feasible.

Here, fun pieces of software from the field of optimization are used: things like Clasp for answer set programming, Gurobi/CPLEX for mixed-integer and linear programs, and SMT/SAT solvers like Z3 or CVC4 for formal logic proving.

A lot of engineering work goes into these solvers, with domain-specific extensions driving a lot of progress [1]. We owe a substantial debt to the researchers and industries that have developed solving strategies for these problems; they're a significant part of why we can have nice things, from which frequencies your phone uses [2] to how the NBA schedules basketball games. This is the stuff that really helps to have as public knowledge. The solvers at their base are quite good, but seeding them with the right domain-specific heuristics makes so many classes of real-world problems solvable.
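
To make the "minimize cost subject to constraints" shape concrete, here's a toy sketch using Z3's optimizer from Python. The variables, bounds, and cost weights are made up for illustration and don't correspond to any real EDA flow:

    # pip install z3-solver
    from z3 import Int, Optimize, sat

    # Hypothetical toy "placement" problem: pick integer area and layer
    # counts subject to a timing-style constraint, minimizing a cost.
    area = Int("area")
    layers = Int("layers")

    opt = Optimize()
    opt.add(area >= 1, layers >= 1, layers <= 8)
    opt.add(4 * area + 7 * layers >= 60)  # stand-in for "meets timing"
    opt.minimize(3 * area + 5 * layers)   # stand-in for die-area/power cost

    if opt.check() == sat:
        print(opt.model())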

3. Extract your solution and generate code

- I'm not sure what this looks like in EDA; my rough guess is a physical layout or mask set, with the proper fudging to account for the strange effects at that small a scale.

- For 3D printers, this is the emitted G-code (a toy sketch follows after this list).

- For robots, it's a full motion plan that results in all goals being completed in an efficient manner.
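
For the 3D-printer case, here is a minimal sketch of what "extract the solution and emit code" might look like, assuming a solved per-layer toolpath is already in hand; the path data is made up and extrusion amounts are omitted, so this is nowhere near a real slicer:

    # Toy G-code emitter: turn a per-layer XY path into G1 moves.
    def emit_layer(path, z, feed=1500):
        lines = [f"G1 Z{z:.2f} F{feed}"]           # move to layer height
        for x, y in path:
            lines.append(f"G1 X{x:.2f} Y{y:.2f}")  # travel along the path
        return lines

    square = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]
    for gcode in emit_layer(square, z=0.2):
        print(gcode)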

[0] https://hackernoon.com/what-caused-the-latest-100-million-et...

[1] https://slideplayer.com/slide/11885400/

[2] https://www.youtube.com/watch?v=Xz-jNQnToA0&t=1s


A lot of the tooling speed comes from a couple places:

1. The compiler optimizations being applied to the code

- GADTs can be allocated very efficiently

- It can generate code that doesn't box at runtime (immediate values get packed into the pointer word as well)

- Pointers are word-aligned

2. The OCaml runtime

- No JIT

- No warmup

- Can emit native code

3. The HM type system

- Typecheckers for HM are really simple [1]

[1] https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy...

This simplicity plus the design of the compiler makes the OCaml compiler very fast and relatively easy to write (some undergrad CS classes have students build one), and keeps the developer time needed to achieve both tenable.
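
To give a flavor of why HM inference is so tractable, here's a minimal sketch of the unification step at its core, in Python rather than OCaml, with a made-up type representation (type variables are strings starting with a tick, constructed types are tuples); real implementations add an occurs check, generalization, and so on:

    # Minimal unification sketch: type variables are strings like "'a",
    # constructed types are tuples like ("->", arg, result), base types
    # are plain names like "int".
    def is_var(t):
        return isinstance(t, str) and t.startswith("'")

    def resolve(t, subst):
        # Chase substitutions until we hit a non-variable or an unbound variable.
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(t1, t2, subst):
        t1, t2 = resolve(t1, subst), resolve(t2, subst)
        if t1 == t2:
            return subst
        if is_var(t1):
            return {**subst, t1: t2}      # occurs check omitted for brevity
        if is_var(t2):
            return {**subst, t2: t1}
        if isinstance(t1, tuple) and isinstance(t2, tuple) \
                and t1[0] == t2[0] and len(t1) == len(t2):
            for a, b in zip(t1[1:], t2[1:]):
                subst = unify(a, b, subst)
            return subst
        raise TypeError(f"cannot unify {t1} with {t2}")

    # The type of the identity function unified against int -> 'b:
    print(unify(("->", "'a", "'a"), ("->", "int", "'b"), {}))
    # {"'a": "int", "'b": "int"}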


Integrating with 20-year-old PLCs and robots is such a pain. Most people we work with have stacks upon stacks of abstraction to keep themselves safe and give them a sane wrapper around the robots' functionality. So many labs have homebrew, monstrous Python scripts, written by a grad student who left two years ago, that everyone uses and no one can maintain.


Learning enough to make your own spin on things by mixing and matching parts is really easy once you understand the mathematical underpinnings.

It's true that many API-level things are opaque about how they work, but their theoretical foundations are usually possible to pick up.

I guess a good way to do this is via an example. I had to build a system to hit the following criteria:

* The data is scarce

* The data is time series

* The data follows a state machine

* The data is noisy, and contains a signal that's unique across all training samples

Gluing together multiple domains of knowledge (generative tactics for label propagation, EM for state-machine convolutional decoding, and five or six others) gave the company something that worked, on a short timescale, and scaled well.

Understanding where things glue together, and under what circumstances to apply that glue, has been what's most helpful to me.

A great place to get an intuition for what I mean is Gilles Louppe's PhD thesis on random forests, specifically sections like the one on the bias-variance tradeoff.

https://arxiv.org/abs/1407.7502
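
If it helps make the bias-variance tradeoff concrete, here's a toy decomposition in Python; the setup (polynomial fits to a noisy sine) is entirely made up for illustration and isn't taken from the thesis:

    # Toy bias-variance decomposition: fit polynomials of two degrees to
    # noisy samples of a sine and compare bias^2 and variance.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    truth = np.sin(2 * np.pi * x)

    def experiment(degree, trials=200, noise=0.3):
        preds = np.empty((trials, x.size))
        for i in range(trials):
            y = truth + noise * rng.standard_normal(x.size)
            preds[i] = np.polyval(np.polyfit(x, y, degree), x)
        bias2 = np.mean((preds.mean(axis=0) - truth) ** 2)
        variance = np.mean(preds.var(axis=0))
        return bias2, variance

    for degree in (1, 9):
        b2, var = experiment(degree)
        print(f"degree {degree}: bias^2 ~ {b2:.3f}, variance ~ {var:.3f}")

The low-degree fit has high bias and low variance; the high-degree fit flips that, which is the tradeoff ensembles like random forests try to manage.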


Thank you. Will review.


Scala.js scratches that itch for me. I rarely write frontend code, but when I do, the extra IDE support given to me by IDEA for Scala.js is pretty great.

I'm a full-time Scala dev though, so I'm totally biased.


It is kind of surreal to look at a source file and not immediately know if it's Scala or Scala.js (like, oh wait, this is the front end).

Just finished porting a legacy 5K-line CoffeeScript front end over to Scala.js. Having a typed front end with all the power of Scala completely transforms the browser development experience. Great stuff.

The only major drawback is that the generated JS bundles incur a 160KB "tax", primarily due to the size of the Scala collections. Though, FWIW, the front end weighs in at 232KB non-gzipped, and we were able to scrap the jQuery + DataTables + modal window and validation plugin dependencies, shaving off about 200KB compared to the previous front end.


A while ago I had to do a complex ML task.

It involved tons of time series data that followed a state machine, with very little training data.

A useful algorithm to force a series of noisy predictions to follow a state machine is the Viterbi decoder.

Numba let me write a JIT-compiled version that got order-of-magnitude improvements, especially when there were over 10^8 time-series points.
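
For flavor, here's a minimal sketch of a Numba-JITted Viterbi decoder, assuming log-probability transition and emission matrices; it's an illustrative reconstruction, not the actual code I used:

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def viterbi(log_trans, log_emit, obs):
        # log_trans: (S, S) log transition probabilities
        # log_emit:  (S, O) log emission probabilities
        # obs:       (T,)   observed symbol indices
        S, T = log_trans.shape[0], obs.shape[0]
        score = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=np.int64)
        score[0] = log_emit[:, obs[0]]            # flat prior folded in
        for t in range(1, T):
            for s in range(S):
                cand = score[t - 1] + log_trans[:, s]
                best = np.argmax(cand)
                back[t, s] = best
                score[t, s] = cand[best] + log_emit[s, obs[t]]
        # Backtrack the best state sequence.
        path = np.zeros(T, dtype=np.int64)
        path[-1] = np.argmax(score[-1])
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path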

It's a great piece of software, if a bit finicky sometimes.


Can you elaborate on the finicky part?


I've noticed two pain points: installing outside of Anaconda can be a real chore, and the error messages were extremely unhelpful (as of about 12-18 months ago; hopefully it's better now).


I work in non-Anaconda environments, and this single pain point has kept me away from it. I do some borderline work where I need both the scientific stack and Django/Flask/"webby" libraries, so I could never pull off "going full Anaconda" on the stack.


One of my professors, while an extremely competent physicist, chose to focus far more on the philosophy of teaching. Sanjoy's metaheuristic approach to teaching, treating it like a hard science with real analysis of the effects of differing methodology, resulted in a fantastic class called "The Art of Approximation". Turns out the 80/20 rule can be stretched in so many different ways!

This is the textbook, and it's one of the few books that has actually changed my life: https://ocw.mit.edu/resources/res-6-011-the-art-of-insight-i...


Oh cool. I'm actually at Olin now, and even better, taking his class as I type this post.

Art of approximation https://imgur.com/a/OvDzl

I'll pass along any cool questions.


I particularly like Andrej Karpathy's analysis: https://cs.stanford.edu/people/karpathy/hn_analysis.html

