I'll try that one. Last time was frantic and tried like 5 bad sites before finding at least where the plane was.
But maybe the passengers or crew on that plane had a better idea of why they were delayed, which I'm not sure any site would show. I've been on the other end too, knowing my plane isn't taking off due to a jammed door and texting updates to someone who was supposed to pick me up on the other end.
I asked for a refund and they offered 7 months of Spotify. I said no, I need a replacement for the screen; then they said the refund would be sent to the card I pay for Premium with within the next 14 days.
I am not into classical music, but I think there are similarities for fans of jam bands, especially the Grateful Dead. I've never heard a group so passionate about specific versions of a song depending on the date/location.
I feel UI designers could make something work, but it's such a niche group of people that it's hard to justify the resources for designing and maintaining a catalog of recordings of different versions of the same song.
Come to think of it, Phish or Grateful Dead live shows have some overlap, but that's more about recordings on different dates by the same artist, while classical adds different artists, or sometimes the same artist on different dates, or sometimes the same orchestra but a different conductor, or...
Yes, that's the closest parallel I can think of, but I think it's an even smaller niche than classical music.
Are you mad about the content or the way it was shown (interfering)? I get these things every couple of weeks, and they're usually pretty targeted/relevant. When one isn't directly relevant to me, it's usually pretty important culturally.
Personally I think Spotify is the best content provider at giving me good recommendations. Compared to Netflix, YouTube, and other media sources, they don't show the same 10 items mixed into other topics.
The unfortunate thing about the insane number of tests that Spotify runs to make their platform more engaging is that some people will have bad experiences.
I don't mind recommendations, but the popup crosses a line. It's a first-party ad; no doubt Spotify gets paid to promote certain music. If you're paying for an ad-free experience, I can understand why it'd be frustrating.
Even if the popup is extremely relevant to me, it still has two problems:
1. If I'm opening the app, I probably have a destination in mind, and a surprise popup isn't it.
2. After I decline the popup, whatever content it was trying to hawk at me, no matter how appealing, is lost to the aether. Who got value from that? Not me, not Spotify, not the content owner.
Discover Weekly and Release Radar are really good, so good that I can't seem to stay with any competitor for very long. I'm not sure why popups are seen as a good idea by anyone. Why can't it just put the promo content in those? They know I use them.
It's kinda like how any software eventually tries to emulate email. Every marketing department eventually turns into a spammer, no matter how well intentioned they were in the beginning.
All entertainment seems to be broken. When I go to watch something on Netflix, my recommended list is almost identical across categories: comedy, adventure, etc. Their machine learning must show that people will just watch whatever is put in front of them.
Facebook served me an ad a couple of months back urging me to apply for a job at the hospital I've worked at since 2015, WHILE I WAS ACCESSING FACEBOOK FROM MY WORK VPN. I have it listed as my workplace and I was coming from an IP address owned by it, and still the computer thought it should show me the ad. I just don't get it.
Spotify transitions to “songs like those in your playlist” when a playlist finishes… and 95% of those songs are from the playlist I just finished listening to. It makes no sense.
Trillions of decisions have been made off a tool that has poor error handling and data consistency issues. Blame is not entirely on either the user or the software, but the tool is too trusted without validation.
Programmable commands instead of a data grid would be a huge improvement to quality, but people use Excel in many ways. Python is out of reach for most people; SQL would be an improvement as well. I assumed Airtable or something similar would replace Excel over time, but the sunk cost of existing reports and the shareability seem to keep Excel in control.
Airtable really ought to be killing Excel, but the SaaS model combined with a stupidly low artificial row count limit (over 50000 rows is listed as "contact us for pricing") means that it will never achieve penetration into weird and wonderful use cases like Excel has.
Like, my default is to throw a dataset I'm hacking on into an SQL database so I can actually query the thing. But no, I don't want to upload my 400MB log file. I'll just use grep, or build a CSV and deal with Excel filtering.
Airtable should be awesome at reducing the cost of database-ifying these random datasets to zero. But the sales constraints put it in this niche where it's not the default tool of choice.
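To make the "database-ify at zero cost" idea concrete, here's roughly the workflow I mean, sketched with Python's stdlib sqlite3 and a hypothetical stand-in dataset (the real thing would read the actual file):

```python
import csv
import io
import sqlite3

# Hypothetical stand-in data: a tiny "log file" as CSV text.
csv_text = "level,msg\nERROR,disk full\nINFO,ok\nERROR,timeout\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (level TEXT, msg TEXT)")
db.executemany("INSERT INTO log VALUES (:level, :msg)", rows)

# Now the dataset is queryable instead of merely grep-able.
errors = db.execute(
    "SELECT COUNT(*) FROM log WHERE level = 'ERROR'"
).fetchone()[0]
print(errors)  # 2
```

The point is that this should be a one-click operation in a tool like Airtable, not a hand-rolled script.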
You might want to try out Baserow (https://baserow.io). It’s an open source alternative to Airtable, backed by a PostgreSQL database. Main differences are that you can self host it with unlimited rows, it’s modular and it’s made to handle high volumes of data.
I found what Airtable is doing to be deeply attractive. But their costs and their lock in and their pricing model and it's just...
UGH.
Microsoft Access was a good idea with a terrible implementation.
There HAS to be an unfilled niche here.
nocodb looks to be the best answer so far? Because it ties to a backend Postgres database, it can be used alongside bespoke applications. It still needs development, though. I'm watching it like a hawk.
We are trying to answer this with CloudTables (https://cloudtables.com) - which is effectively a GUI for my DataTables library with a Postgres backend. Current work is to address the row limit and allow millions of rows without needing to contact "sales" (me), while also not charging per user (I hate that as a customer). If anyone fancies giving it a go and dropping me some feedback that would be most welcome! There are some rough edges without question that are still being worked on, but I think it has some advantages such as being able to self host with your own Postgres instance.
Access was pretty incredible for what it is/was. I could build a structured database with a nice UI for non-data people, reports, and even more advanced things like automated emails, exports, etc., in 1/10th or even 1/20th the time it'd take to build something similar as a web app.
We had an Access database that managed grant funding for an entire public University and in many ways it worked a lot better than the SaaS app that recently replaced it. Need to collect a new set of data? No problem, give me 4 hours and it'll be ready to use :P.
I'd love to have something like Access but that worked very well as a platform-agnostic web app and could easily integrate with cloud infrastructure.
You used to be able to do that with Access 2010 web databases. Of course, Microsoft has deprecated that in favor of Powerapps and Microsoft Dataverse, but it's not clear that actually lets you join an Access database to a low-code frontend. (It should, but there's a lot of marketing speak that I don't quite understand.)
Access wasn’t even that bad of an implementation. It was amazing not just how broadly Access was used but the kinds of users who could do real things with it. A bit like HyperCard.
What Access lacks, IMO, is better internal programming and more exposure to the fact that you can use pretty much any database you can access (pun intended) via ODBC or ADO.
Make it easily deliverable over a network, and you have a killer product.
I also think it’s just kinda clunky compared to Excel or Google Sheets. Maybe once you get used to it it’s OK to work with, but I guess you run into the issue that any friction makes it a hard sell to those who are used to Excel.
What do you mean by plain English? Will it support colloquialisms? Regional dialects? How is a Left Join expressed in English, distinct from a Full Join? Will it accept synonyms and contractions? Or will the Query Language require Structure?
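To make the ambiguity concrete, here's a toy Python sketch (hypothetical data) of how one English-sounding request, "combine users with orders", maps to two different joins:

```python
# Hypothetical tables as dicts: name -> id.
users = {"alice": 1, "bob": 2}
orders = {"bob": 20, "carol": 30}

# LEFT JOIN: every row from `users`, matched against `orders` where possible.
left_join = {k: (users[k], orders.get(k)) for k in users}

# FULL JOIN: every row from either side, with gaps filled by None.
full_join = {k: (users.get(k), orders.get(k))
             for k in users.keys() | orders.keys()}

print(left_join)
# {'alice': (1, None), 'bob': (2, 20)}
```

A "plain English" query language has to pick one of these (or ask), which is exactly where the structure creeps back in.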
Yeah their monetization strategy is extremely puzzling. As a casual user I loved their Chrome extension that lets me grab data and put it into a sheet in a click but it only lasted as long as my Pro membership. All of the advanced features seem to be locked behind a subscription.
Apparently Grist can handle 100,000 rows, and that's just a soft limit, so you might be able to do more.
Being able to have an Excel-style grid and a chart view on the same page would probably suit your use case as well. Being able to use Python for the formulas is a nice touch too.
The free hosted version has almost all features available, as far as I remember. There's also a Docker version that's easy to get up and running and doesn't have any limitations.
> Programmable commands instead of a data grid would be huge improvement to quality...
Would it? At the end of the day, someone else still has to proofread and QA the commands/formulas/program, or the decision rests on blind trust: trust (or ignorance) that the creator knew what they were doing and built it accurately before anyone acts on the result. The interface really makes no difference; it's the human component and the "process" for creation that need to be fine-tuned. The London whale situation was a process failure where one person had too much power to execute trades without oversight, review, QA, testing, etc. [0] All things that are pretty standard in a software developer's day-to-day, but the rest of the world has not realized or adjusted to the fact that they are now software developers too.
[0] Excel wasn't the problem with the London whale at all, they made a mathematical error "modelers divided by a sum instead of an average"
The Reinhart-Rogoff issue was technically an error in Excel, but also an error by the authors for not actually verifying the results before publishing. It didn't hurt that their particular biases were in line with the results.
The technical problem can be addressed with more warnings and safeguards, but they are meaningless if no one uses them.
I hadn't previously read up on the RR issue. But after some surface level research, I would not say it was Excel as an issue. It sounds like the tool did exactly what they programmed it to. It seems like human error or choices they made to arrive at the conclusion they wanted; which seems to be speculated (or true, I only scratched the surface).
> While using RR’s working spreadsheet, we identified coding errors, selective exclusion of available data, and unconventional weighting of summary statistics. [0]
I'm not a fan of tools giving warnings for these types of "coding errors". One warning I do think would be nice is when math just doesn't work as expected. The recent floating-point discussion [1] seems relevant: it's just not very intuitive, and as a programmer you need a pretty deep level of understanding to know that the resulting math is likely not accurate. But it also seems to affect nearly every programming language and is not a quirk of one specific thing.
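For reference, the floating-point surprise in question looks like this in Python (Excel uses the same IEEE 754 doubles under the hood, so it has the same behavior, just usually hidden by display rounding):

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so the sum drifts:
total = 0.1 + 0.2
print(total == 0.3)   # False
print(total)          # 0.30000000000000004

# Decimal string arithmetic behaves the way spreadsheet users expect:
exact = Decimal("0.1") + Decimal("0.2")
print(exact == Decimal("0.3"))  # True
```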
I'd be interested to read more if you have info outlining the actual error within Excel. If there is some 2+2=5 situation, I'd be interested to learn about it. I feel like every time someone says "Excel error", it's actually "human error". It would be as if every car accident were blamed on vehicle malfunction when we all know it's most likely an operator issue.
I think what you're touching on is complexity and how many people tend to trust complexity because it's too difficult to validate and you must know what you're doing if you built something so complex.
Even in a corporate environment, I use spreadsheets to support big decisions every day, the format is as you say universal and easily interpretable, but many people never go that far. They trust that I did it correctly and take the "output" as truth. If I screw up big ($), it might cost me my job, but is it really my fault if nobody else even bothered to check my work?
It's not our fault; our brains are wired to find quick & dirty; simple, not small, is beautiful. But people are also contemptuous of what they understand, and accept at face value what they don't. Michael Crichton, much disparaged in his later years for contentious political stances, had much of value to say in his Gell-Mann Amnesia effect's "wet sidewalks cause rain" critique of how we parse information.
Still, I'm glad it's transparent in this case and thus exposable. How much more is hidden, guarded jealously even, as though the methodology were a trade secret instead of, you know, the underpinnings of the proposed outcome?
I think that more precision over application of formulas would solve a lot. Arrays are mapped over by copying the code for each array element by dragging it across a row of cells, and the arguments to the formula are automatically mutated based on where the code is dragged to. This can be error prone.
More concrete definitions of where a formula should apply would be good, for example, leave the formula cell in one place and specify that one argument should come from this range of cells and the other from this range of cells, and the output should be mapped to this range.
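In Python terms the idea looks something like this (the cell ranges are hypothetical): the formula is written once, and the ranges it maps over are declared explicitly rather than inferred from drag-fill reference mutation:

```python
# Stand-ins for declared input ranges, e.g. A2:A4 and B2:B4.
prices = [10.0, 12.5, 8.0]
quantities = [3, 1, 4]

def line_total(price, qty):
    # The formula lives in exactly one place.
    return price * qty

# The mapping over the ranges is explicit, so nothing silently
# shifts a reference when a row is inserted or a cell is dragged.
totals = [line_total(p, q) for p, q in zip(prices, quantities)]
print(totals)  # [30.0, 12.5, 32.0]
```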
I don't feel that Python is out of reach for most who are using Excel with vlookups. I do feel that most Pandas code is poorly written and thus not at all compelling to replace Excel.
(My background is that I teach Python and Data Science to large corps.)
+1, the Pandas API is somewhere between mediocre and bad, and results in garbage code unless you use it in a carefully constrained way (which is admittedly true of many complete languages, much less libraries that organically evolved several tooling generations ago)
I'm sorry, but this is just wrong. Natural gas prices are higher than I've ever seen them, only comparable to one very cold winter year. This is not just a return to pre-Covid prices, this is due to government policy. Same with crude oil prices. Just look at the five year chart for NG1:
This is because of the shale revolution, so you can't go back that far. They're different eras. As for 2014, it was because of the polar vortex. The current high prices are in the summer.
Five years is not the only thing -- it's also summer prices, whereas the other peaks were during very cold winters. Natural gas is very very seasonal, so winter gas and summer gas are different commodities almost.
Production is higher than ever before. Anyone can write any article that they wish. The only public data available right now from the EIA is a little lagged, but keep an eye on the September number when it is available.
As you say, much of the low gas prices that have occurred in recent memory is due to fracking. Those of us who are a bit older remember when natural gas used to be much higher than we're currently seeing, even before adjusting for inflation.
The energy companies used to also ramp up production in areas like the Permian Basin as soon as oil/gas reached certain thresholds. They are generally being much more disciplined right now because of uncertainty about the pandemic. From that perspective, the prices are more indicative of the "post-pandemic" economic recovery than inflation. If there was less uncertainty about the pandemic recovery, they would be more likely to invest in additional drilling and the price would come down from current levels. They just don't want to be caught ramping up production while seeing a simultaneous pandemic-driven downturn in demand.
Just look at the personal savings rate. It shot up from less than 10% to over 30% and is now back down. This was the stimulus checks, which people pocketed and used to buy financial assets. That's another form of inflation, but the "good" kind that us rich capital holders like. Eventually this money will re-enter the market and push up prices for everyday goods.
Also, the official inflation rate isn't based on a fixed basket of goods; the basket changes. For instance, if chicken becomes more expensive, they weight beef more heavily. But pretty much anyone who's been to a restaurant in the last 6 months can tell you that prices are up a lot more than 5%. I guess just eat out less, right?
That's fine, but I don't want my inflation measure to make those decisions for me, because it's obviously gamed. Just tell me how much stuff costs relative to the past.
In the short-term these adjustments feel wrong. But at what point does a no-adjustment inflation measure stop measuring inflation?
Buggy whips and hats are not in current inflation measures. Long ago, computers and airline tickets weren't in inflation measures. When did "eating at restaurants" become enough of a thing that it became part of inflation measurement? Restaurants have existed a really long time, no? Ancient Romans had "fast food".
Zoom out and think about it. Entire product categories come and go from our daily lives. Is there any possible method for understanding "how much stuff costs relative to the past" that doesn't look janky in the short run? This is a hard problem.
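A toy calculation (all numbers hypothetical) shows why the substitution question upthread matters so much to the headline number:

```python
# Hypothetical prices: chicken doubles, beef stays flat.
base_prices = {"chicken": 5.0, "beef": 10.0}
new_prices = {"chicken": 10.0, "beef": 10.0}

def index(prices_then, prices_now, basket):
    then = sum(prices_then[g] * basket[g] for g in basket)
    now = sum(prices_now[g] * basket[g] for g in basket)
    return now / then

# Fixed basket: people keep buying 2 chicken + 1 beef.
fixed = index(base_prices, new_prices, {"chicken": 2, "beef": 1})
# Substitution: the basket is reweighted toward beef after the price change.
subst = index(base_prices, new_prices, {"chicken": 1, "beef": 2})

print(fixed)  # 1.5 -> 50% measured inflation
print(subst)  # 1.2 -> 20% measured inflation
```

Same price changes, very different measured inflation, which is why which basket you hold fixed (and when you reweight it) is the whole argument.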
FYI - there are 100 or more different CPI series that are published. The news reports on a single one - the one that applies to the largest number of people in the US. You may find that a different one is more applicable to your particular circumstance.
Just wanted to add: if all else is held equal among those factors, growth in GDP without any subsequent change in the money supply or velocity would lead to deflation.
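That's the equation of exchange, M·V = P·Q: hold money supply M and velocity V fixed, grow real output Q, and the price level P must fall. A toy sketch with hypothetical numbers:

```python
# Equation of exchange: M * V = P * Q, so the price level P = M * V / Q.
M, V = 100.0, 2.0                # money supply and velocity (hypothetical)
Q_before, Q_after = 50.0, 55.0   # real output grows 10%

P_before = M * V / Q_before      # 4.0
P_after = M * V / Q_after        # ~3.64: prices fall, i.e. deflation
print(P_after < P_before)        # True
```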
Economists would argue that the 'printed' money is actually reserves at the central bank, and thus has no impact on price levels. I myself am less certain about the overall dynamics.