
As another commenter noted, I don't trust NYT's lawyers with my chats any less than I trust OpenAI, but the spread of private data should be limited as far as possible.

I just cancelled my NYT subscription because of their actions, detailing the reason for doing so. It’s a very small action, but the best I can do right now.


> spreading private data should be limited as far as possible.

The NYT making the logs public is extremely unlikely.


...especially since no NYT employees will even see them. They will only be seen by outside lawyers the Times contracts with to handle the litigation, and maybe some outside experts those lawyers hire.

The people who do see them won't even describe anything they see in them to Times employees. Times employees won't see much more than what you or I will be able to see by reading public court filings and attending hearings and the trial (if it gets that far) in person.

Courts have been dealing with highly confidential and sensitive information in discovery for decades (the present system used in federal litigation has been around since 1938) and take care to limit access.


I agree that it’s not zero, but according to CDC, the US sees about 1.35 million cases per year in a population of about 346 million, which is about 390 cases per 100,000 people. Your figure for the EU over a population of 447 million in 2022 gives 14.5 cases per 100,000 people, or more than a factor of 26 less.

Being 26 times less worried about something translates, at least for most things, into not being worried about it at all, for me.
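For what it's worth, the arithmetic behind those figures (population and case counts as quoted above):

```python
# Back-of-the-envelope check of the per-capita rates quoted above.
us_cases = 1_350_000        # CDC figure, cases per year
us_pop = 346_000_000        # approximate US population
eu_rate_per_100k = 14.5     # EU rate from the parent comment (2022, pop. ~447M)

us_rate_per_100k = us_cases / us_pop * 100_000
print(round(us_rate_per_100k))              # ~390 per 100,000
print(us_rate_per_100k / eu_rate_per_100k)  # ~26.9, i.e. "a factor of 26"
```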



I think that it’s pretty relevant that this is from this year’s Ig Nobel prize winner in literature: https://news.mit.edu/2022/mit-cognitive-scientists-win-ig-no...


Not DALL-E 2, but Disco Diffusion (https://colab.research.google.com/github/alembics/disco-diff...):

* https://jossi.avkrok.net/Escher_0.png

* https://jossi.avkrok.net/Escher_1.png

* https://jossi.avkrok.net/Escher_2.png

The prompt was "A stained glass image of M.C. Escher working in his studio in non-Euclidean space", 600 iterations, 1024x768, and these are the first three it came up with. It's not perfect, but it definitely knows who he was!


Nice! Although I somehow expected there to be more stairs :-)


Working link, open access, with link to pdf: https://epjc.epj.org/articles/epjc/abs/2021/07/10052_2021_Ar...


It’s like they need a socialist revolution in China. Workers of the world, unite!


I mean, yeah, but unironically.

China is state capitalist, not communist, despite having a "Communist Party". Its governing ideology is Dengism, which replaced Maoism (which itself in practice was state capitalist, just as Leninism was in the USSR). Dengism defines "special economic zones" which are explicitly capitalist and are ideologically justified as temporary, with the promise that the capitalists they create will be expropriated by the people when the time has come (whenever that is).

Ending capitalism not only means "getting rid of" the capitalists (which state capitalism does by substituting them with the state) but getting rid of the capitalist mode of production, i.e. the employer-employee or capitalist-worker dichotomy. The easiest way to understand this when coming from a capitalist realist mindset (which is probably most people on HN) is in the form of worker cooperatives, where workers either decide how the business is run democratically or delegate decisions to a representative (and being able to revoke this delegation at any moment if their interests are not reflected properly). There's no single "owner" external to the "workers", the workers control the company.

State capitalism was intended as a "transitional state" to enable a communist revolution. The USSR justified it by stating that a communist revolution would require a level of industrial development Tsarist Russia didn't have (being a primarily agrarian society) and then claiming that a communist revolution would have to be global in order to succeed (which is a great justification for imperialist expansions and authoritarian rule to "maintain order" until then). Some people would say that if you contrast Lenin's writings with the politics he supported directly following the Soviet revolution, he was more of an opportunist seeking political power than an actual ideologue.

Mao was similarly focused on "creating the conditions" to enable communism at some point in the future rather than actually promoting radical democracy and equality directly, although his death count was mostly the result of authoritarian bureaucracy going wrong and naïve optimism about foreign technological accomplishments, which arguably was a problem he inherited rather than created.

That said, I don't think revolutions work, certainly not at scale and not if you want to create a more egalitarian society rather than another autocracy. The dynamics of revolutions require a small group exerting power on a larger system, which is in itself antithetical to the idea of how a communist society would be structured (hint: basically the opposite way). A lot of socialist revolutions during the Cold War were derailed by either side trying to turn them into a proxy war resulting either in violent defeat or a red authoritarian client state of either the USSR or China.

It's probably a better strategy to strengthen communal bonds by creating parallel local structures that can take over when capitalism fails. Worker cooperatives would help with this as they tend to contribute to local communities rather than merely exploit them as resources. (Small) unions can also be an important tool as they can force capitalist corporations to act more like cooperatives by enabling collective bargaining against the owners.

Oh, sorry for the lengthy response. You were probably intending this as an absurdist quip, not a genuine suggestion.


I recently (last week) started using numba, for similar reasons to why the author seems to like Julia. I tested translating his example to numba:

  # np, numba, and k (the wavenumber) are assumed to be defined globally
  @numba.njit(parallel=True, fastmath=True)
  def w(M, a):
      n = len(a)
      for i in numba.prange(n):  # parallelised outer loop
          for j in range(n):
              M[i,j] = np.exp(1j*k * np.sqrt(a[i]**2 + a[j]**2))
and timed it like this:

  %%timeit
  n = len(a)
  M = np.zeros((n,n), dtype=complex)
  w(M, a)
On my 8-core system, this ends up more than 10x as fast as the numpy version he listed (which seems to lack the sqrt, though), which would place it close to the multithreaded Julia, even considering that he ran his on a 4-core system. As an added bonus, it can also pretty much automatically translate to GPU as well.
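For reference, a self-contained NumPy version of the same kernel (with the sqrt included; k is set to an arbitrary placeholder here, since the original snippet picks it up from an enclosing scope):

```python
import numpy as np

k = 2 * np.pi                       # placeholder wavenumber (assumption)
a = np.linspace(0.0, 1.0, 512)

# Broadcasting builds the full n x n matrix in one vectorised expression:
# M[i, j] = exp(1j * k * sqrt(a[i]**2 + a[j]**2))
M = np.exp(1j * k * np.sqrt(a[:, None] ** 2 + a[None, :] ** 2))
```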


The rise of Numba shows why numpy and "just write vectorised-style code with a C++ backend" is not enough.

Yet Numba basically makes your Python code not Python. It doesn't support many things: not pandas DataFrames, not even something as simple as a dict(), which means you often have to manually feed your Numba function separate arguments.

Separating a complicated calculation into Numba-inferable parts and everything else is not fun, and sometimes just impossible.


Yep, completely agree. For the project I'm currently doing, it seems like a fairly good fit, though: lots of prototyping of different approximations, and it needs to be faster than plain numpy.

Also, jitclass helps somewhat. I use jitclasses as plain data containers with no methods, to work around the hideously long argument lists that would otherwise be required. jitclass breaks the GPU option, though.


I have personally experimented quite a lot with numba. When it works, it's great. However, numba can have very cryptic error messages making it difficult to debug.

Which is why I switched to dask, which, even though it's slower, integrates better with numpy.


Actually, Numba does support dicts now. (You can't have a mix of types in a dict, unless that's changed, but that isn't an actual problem for most ML work.) I have used Numba very effectively to make my machine learning research projects run very quickly. I don't use pandas; I do use a lot of numpy and scipy. I understand that pandas can use numpy arrays for at least some things, and since Numba works great with numpy, that might be an approach for using it with pandas, in at least some cases.


The number of people in intensive care is most certainly not growing exponentially. Please see the official statistics on https://portal.icuregswe.org/siri/report/vtfstart-corona . The number of patients admitted to intensive care daily has gone from 43 on the 23rd of March to 10 today.

The cumulative number of patients admitted to intensive care is 278, and since we had 510 intensive care beds before the start of the pandemic, that is not a problem at all.

Unless you have reason to believe that the ratio of patients requiring intensive care somehow has decreased radically during the course of the pandemic, the growth of cases is most certainly not exponential.


There is lag in the reporting, I think. The total number of patients increased by 39 from yesterday, but those were probably assigned to multiple dates in the past, so it will always look like the last few days are declining.


I don't quite agree with your complaints.

Regarding #2, what do you consider the problem with representing arbitrary precision decimals as numbers? That your javascript json parser converts json numbers to 64-bit floats? I'm not sure that that is really a problem with FHIR - see e.g. https://www.npmjs.com/package/json-bigint . Or is the problem that exponents aren't allowed?
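(In Python, for comparison, the standard json module already lets you keep arbitrary decimal precision by routing float tokens to Decimal; the document and field names below are made up for illustration:)

```python
import json
from decimal import Decimal

doc = '{"value": 2.50, "dose": 0.1}'

# parse_float receives the raw number token as a string,
# so no precision (or trailing zeros) is lost to float64.
data = json.loads(doc, parse_float=Decimal)
print(data["value"])  # 2.50 -- trailing zero preserved
```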

About #3, how would you standardise addresses? ISO has been at it for several years and still hasn't produced a standard. See e.g. http://stackoverflow.com/questions/4840928/iso-standard-stre... for relevant links.

Your fourth complaint is plain invalid. A period consists of two dateTimes, both of which specify time zones.

Numbers one and five I can somewhat sympathise with, though.


So, specifying dateTimes (as defined here: https://www.hl7.org/fhir/datatypes.html#dateTime) is not sufficient. Remember, it isn't including the time zone (no code or anything); it's including the offset. The difference is the usual conflation of time, time zone, and location... basically, if you don't have the actual physical and political location, the offset is of only academic interest and you might as well just display UTC.
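A concrete illustration of offset vs. zone, using Python's stdlib zoneinfo (assuming the tz database is available): the same named zone yields different offsets across the year, information a bare offset can never carry.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")  # a zone, i.e. a political location

winter = datetime(2024, 1, 15, 12, 0, tzinfo=ny)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=ny)

# Same zone, different offsets: UTC-5 (EST) in January, UTC-4 (EDT) in July.
# Storing only "-05:00" would silently lose the DST rules.
print(winter.utcoffset(), summer.utcoffset())
```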

On 3, the complaint was specifically caused by this passage:

"However, this is frequently not possible due to legacy data and/or clerical practices when recording contact details. For this reason, phone, fax, page and email addresses are not handled as formal URLs. For other kinds of contacts, the system is "other" and the value SHOULD be a URL so that its use can be determined automatically."

Addresses (postal, in your case) are arguably best handled as dumb strings, of type "garbage_human_address". The problem I have is with the explicit admission of "well, support the legacy", when everyone knows that the legacy stuff needs to go. We've been killing ourselves by compromising and allowing the mistakes of the past to corrupt the systems of the future in healthcare.

As for #2: if there is ever any question about the representation of numbers, especially in JSON, you store them as strings. Twitter ran into this with IDs, and others have too; it's just generally icky. If the JavaScript/JSON representation of numbers is being used (and it shouldn't be, because the numeric types in JS are kinda broken), then there shouldn't be a restriction on exponential notation.
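The underlying issue is the same in any language that maps JSON numbers to IEEE-754 doubles; a quick Python demonstration:

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly...
print(0.1 + 0.2 == 0.3)                                    # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

# ...and a 64-bit float cannot hold every large integer ID either:
# 2**53 + 1 silently rounds to the nearest representable double.
print(float(9007199254740993) == 9007199254740993)         # False
```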

