Hacker News | citrons's comments

Also agree, with each release I get a "eureka effect": now I can solve the type issue I struggled with a couple of months ago while trying to make some heavily used function safer/easier to use for the developers.

Example: the new satisfies operator, and some upcoming "as const" features for generics I'm looking forward to.
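For context, `satisfies` (TypeScript 4.9+) type-checks a value against a type without widening the inferred type. A minimal sketch (names are illustrative):

```typescript
// `satisfies` checks the object against the type but keeps the
// narrow per-property inferred types.
type RGB = [number, number, number];

const palette = {
  red: [255, 0, 0],
  green: "#00ff00",
} satisfies Record<string, RGB | string>;

// With a plain `: Record<string, RGB | string>` annotation these two
// lines would be type errors; with `satisfies` the narrow types survive:
const redChannel: number = palette.red[0];         // red inferred as RGB tuple
const upper: string = palette.green.toUpperCase(); // green inferred as string
```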


Actually liked that it lacked a minimap. Got a better sense of scale. After going through most of the tunnels, getting stuck in the king's chamber, and being unable to crawl back, I zoomed out and was shocked by the scale of the tunnels.


Thanks to you both for feedback on the minimap--this is one that we've had some requests for but I haven't started on yet.


I have so far had a very good npm workspaces experience. Just define the "workspaces" property in package.json and you're off. https://docs.npmjs.com/cli/v8/using-npm/workspaces
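A minimal sketch of a root package.json for workspaces (package name and folder layout are illustrative):

```json
{
  "name": "my-monorepo",
  "version": "1.0.0",
  "private": true,
  "workspaces": ["packages/*"]
}
```

Running `npm install` at the root then installs and symlinks every package under packages/ together.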

Right now the only pain point with npm is that "npm link" can't be forced to install peer dependencies, so I'm unable to easily test TypeScript-built libraries within other projects.


I'm currently having a similar issue where the query planner refuses to use the indexes on a search query (it was fine for a while, but one day it just started de-optimizing itself) and instead just does a seq scan. Instead of the execution taking ~40ms with indexes, the query planner thinks the ~1.5s seq scan is better...

I re-index the db and run ANALYZE on the table. It gets better for 30 min at most, then PG de-optimizes itself again.

I'm kinda stuck on it, any ideas what I can do to resolve it?


Try lowering the random_page_cost value; this is the cost the query planner assigns to random reads, which is usually too high if you're using an SSD, where random reads are cheap (on spinning disks they're expensive). Just setting it to 1 works well in my case.

This solves many "it does a slow seq scan even though there's an index"-cases.

https://postgresqlco.nf/doc/en/param/random_page_cost/

There are some other query planner knobs you can tune as well; the https://postgresqlco.nf site is pretty good.
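As a rough sketch, random_page_cost can be tried per session first and only persisted once the plan actually improves (the value is illustrative, and the query is a placeholder for the slow one):

```sql
-- Per-session experiment: does the planner pick the index now?
SET random_page_cost = 1.1;
EXPLAIN (ANALYZE) SELECT ...;  -- re-run the problem query here

-- If it helps, persist it for the whole cluster:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```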


If using an SSD or a similarly fast storage subsystem, or one that hides a higher random access time vs. sequential, you may indeed want to reduce random_page_cost so that random_page_cost / seq_page_cost lands in the 1.2-1.5 range.

But it's also wise to review the default_statistics_target being used, check that autovacuum is running frequently enough (which triggers autoanalyze), and that the analyze thresholds are properly tuned...

Thank you for mentioning https://postgresqlco.nf. Team member here :) All the parameters mentioned here are well documented there, with recommendations.

Also, have you tried the Tuning Guide? (https://postgresqlco.nf/tuning-guide)


Is it an HSTORE column with a GIN index? The default "FASTUPDATE=ON" option will delay updates to the index until vacuum time, but if you don't vacuum soon enough it can suddenly decide it should do a sequential scan instead of reading through the delayed updates.

This is behaviour I've seen on 9.x on the Aurora variant; there the solution was to use the FASTUPDATE=OFF index storage option. You can see the delayed tuples using the "pgstatginindex" function.

Using some of the extra options of EXPLAIN (ANALYZE, BUFFERS, COSTS) might give more hints.

If not HSTORE/GIN, then it could be that the planner, after some auto-analyze of the table, thinks that what you are asking for will match a significant fraction of the rows in the table. In that case there's no point in random-seeking through an index: if it thinks it needs to read e.g. 50% of the table anyway, it might just as well not use the index.
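For diagnosing either case, something like the following sketch can help (table/index names are placeholders; pgstatginindex comes from the pgstattuple extension):

```sql
-- Compare estimated vs. actual row counts and buffer usage:
EXPLAIN (ANALYZE, BUFFERS, COSTS)
SELECT * FROM my_table WHERE attrs ? 'some_key';

-- Inspect the GIN pending list (the delayed index updates):
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatginindex('my_gin_index');

-- Apply the FASTUPDATE=OFF storage option mentioned above:
ALTER INDEX my_gin_index SET (fastupdate = off);
```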


`SET enable_seqscan = off`, or `SET LOCAL enable_seqscan = off` inside a transaction. This will force the PG query planner to use indexes. Experiment with it until you figure out why your query performance deteriorates. Maybe you are doing a lot of updates/deletes? Increase the statistics sampling size? Autovacuum more frequently?
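A sketch of the transaction-local variant, which avoids affecting other queries on the same connection (the query is a placeholder):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT/ROLLBACK
EXPLAIN (ANALYZE) SELECT ...;    -- the misbehaving query goes here
ROLLBACK;
```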


Could one disable statistics completely? Personally, I'd prefer to specify the execution plan manually.


Tune autovacuum analyze to run every 30 minutes. Seriously. The query planner needs up-to-date statistics.
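One way to sketch that per table (thresholds are illustrative; tune them to your write rate):

```sql
-- Autoanalyze after ~1% of rows change instead of the default 10%:
ALTER TABLE my_table SET (
  autovacuum_analyze_scale_factor = 0.01,
  autovacuum_analyze_threshold = 500
);
```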


Why does it need up to date statistics to decide not to change anything?

I mean, if you could freeze statistics entirely wouldn't that fix this problem?


Because the contents of the table are changing, the statistics are becoming out of date.


That doesn't answer the question at all.

The old statistics said to use the index.

If it's still using old statistics, why does the behavior change?


Because the user is querying values that are no longer covered by the statistics, for example an incrementing timestamp or id column. If the stats are from yesterday and say nothing about the frequency of today's timestamps, the query has to take a pessimistic view of the world. It might be that the data has radically changed since the last stats run, or not. You need to analyze the table to know and make optimal choices.
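To check whether stale statistics line up with the slowdowns, something like this sketch (table name is a placeholder):

```sql
-- When were statistics last refreshed for this table?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'my_table';

-- Refresh them manually and see if the plan flips back:
ANALYZE my_table;
```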


Yeah, the setup-project boilerplate stuff is a mess.

Luckily, for backend it's super easy to set up TypeScript, as the defaults are good enough for weekend projects/prototypes:

```
mkdir my-project
cd my-project
npm init -f
npm i -D typescript ts-node @types/node
npx tsc --init
touch index.ts
npx ts-node index.ts
```


Stuff like 99% support for flexbox, and a high percentage for grid, makes working with layout so good.

Only wish the "gap" property were adopted more widely; it makes working with wrapping flex rows way easier.

Even basic things like the :last-child selector are a good quality-of-life improvement.
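For reference, gap in a wrapping flex container is just this (values are illustrative):

```css
.cards {
  display: flex;
  flex-wrap: wrap;
  gap: 8px 16px; /* row-gap column-gap; no margin hacks on children */
}
```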


For $15 more you can get the deluxe edition with 2 exclusive cars and 5 soundtracks.


+1 for styled-components, my preferred choice now over Tailwind. Styled-components has good integration with TypeScript: no need for custom utility classes or inline CSS, there's no way to write an invalid CSS class name, and CSS written in styled-components can be easily linted.


Incognito helps override this.


I took over a fairly large project which used Tailwind v1. Have a love-hate relationship with it.

I like how fast you can do stuff, but when a designer creates something a bit custom it all falls apart really quickly.

I understand it solves the issue of reusability and style guides, but for modern apps that would also mean everything should already be componentized in React, so ideally you should not really care about CSS at that point.

I'd rather use a mix of styled-components (easily extendable if you need something custom) and CSS variables for dark mode and for defining global values like colors, paddings, etc. Plus I route-split, so a page only has its related styles.


Engineers of the world, push back on this stuff! Chances are your designer will want to work with you to make designs fit the tooling used to implement them. It doesn’t hurt to at least try it, ask questions. It does hurt to silently hold grudges and release what you know is suboptimal.


I'd say that Tailwind only works if the designer is on board or if you are not working to pixel-perfect specs.

