Ravel Law is a new legal search, analytics, and visualization platform. Ravel enables lawyers to find, contextualize, and interpret information, turning legal data into legal insight. Ravel's array of powerful tools – including data-driven, interactive visualizations and analytics – transforms how lawyers understand the law and prepare for litigation. In today's global and increasingly digital world, Ravel empowers attorneys to make sense of the huge influx of legal information and find value in it.
In 2012, Ravel spun out of Stanford University's Law School, Computer Science Department, and d.school, with the support of CodeX (Stanford's Center for Legal Informatics).
We're looking for an experienced front-end engineer. We build our front-end in Ember using Ember-CLI, ES6 transpilers, ember-data, and http-mocks for rapid, modern development. Ideal candidates will be skilled in highly dynamic web interface development (HTML, JavaScript, AJAX, jQuery). In addition, candidates should have a passion for engineering unique interactive visualizations with d3.js and potentially canvas/WebGL. A flexible front-end engineer with 3-5 years of JavaScript experience will excel in this role, and 2+ years of professional Ember experience will distinguish leading candidates.
At Ravel, we develop the legal profession’s most innovative products for data analysis, visualization, and research - uncovering insights about judges’ rulings, revealing critical cases, enabling lawyers to make data-driven decisions, and more.
Ravel was launched from Stanford University's Law School, Computer Science Department, and d.school, with the support of CodeX (Stanford's Center for Legal Informatics). We have been featured in Wired, The New York Times, and the American Bar Association Journal, and our founder was named to the Forbes 30 Under 30 for 2015.
We are a rapidly growing Series A startup funded by top-tier investors like NEA. We offer competitive compensation, equity, and health care. Our culture is extremely dog- and human-friendly. Our headquarters is in San Francisco, South of Market - conveniently located between BART and Caltrain.
We're looking for Data Engineers (Scala, Spark, SQL) and Data Scientists (Spark, H2O, Stanford NLP). Check out the full descriptions and apply at https://jobs.lever.co/ravel.
We're looking for Front-End Engineers (jQuery, Ember, D3), Full-Stack Engineers (Scala, JS, Mongo), and Data Scientists (Spark, H2O, Stanford NLP).
Check out the full descriptions and apply at https://jobs.lever.co/ravel.
Can I ask, do you really need Hadoop and "big data" for this? There have got to be substantially fewer than 10k courts in the United States. What needs to be processed that SQL can't accommodate?
Meta-note: It may be wise to make a rule on what's appropriate to leave as a comment on these hiring posts. I can see some companies shying away if they feel like it's going to turn into a "critique my stack and/or hiring process" thing.
We're processing the opinions rather than the courts, so we're dealing with millions of documents. Since we're building a network of their citations, it winds up being way too much data to hold in memory on a single node, hence the need for Spark.
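A rough back-of-envelope sketch makes the scale argument concrete. The corpus size, citations-per-opinion, and bytes-per-edge figures below are illustrative assumptions, not Ravel's actual numbers:

```python
# Hypothetical sizing sketch: why a citation graph over millions of
# opinions strains a single node. All three inputs are assumed values.
opinions = 10_000_000        # assumed number of court opinions
citations_per_opinion = 20   # assumed average outbound citations
bytes_per_edge = 200         # assumed edge record incl. metadata

edges = opinions * citations_per_opinion
graph_gb = edges * bytes_per_edge / 1e9

print(f"{edges:,} edges, roughly {graph_gb:.0f} GB before any indexes")
```

At anything like that scale, a partitioned representation such as Spark's distributed datasets lets the edge list be built and traversed across a cluster instead of inside one machine's heap.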
The author of this article seems to be confusing "changing the world" with helping people. UNICEF and similar organizations are great at helping people, but they really aren't meant to change the world. They just make the existing one a little less terrible for those who are worst off. I can't imagine a compelling argument saying that digging wells or tree planting has changed the world more than Facebook or Google over the last 15 years. Whether those changes have been for the best is open to discussion, but the fact that many startups have in fact changed the world in significant ways is hard to dispute.
No, I think I quite understand their world changing impact, but those are two examples of the very few companies that have reached the sort of scale where they actually change the world. They represent the vast minority of successful startups, which are a vast minority of all startups.
In any case, what is changing the world if not helping people? Or, as I phrased it in the post, improving the circumstance of mankind?
There are plenty of ways to do it, and it need not be charity (marine biology or journalism, the other examples from the post, are not charity) and it need not be altogether altruistic in purpose, but to change the world in a desirable way necessarily means helping people. For example, Google helps me find information. Facebook helps me stay connected with friends and family.
Except that the 40 really isn't a very good measure - if it were, the Oakland Raiders would be winning the Superbowl every year instead of missing the playoffs. Of the 15 fastest players in the last 12 years (http://en.wikipedia.org/wiki/40-yard_dash), only 2 are stars, and most are bench guys or out of the league. The 40 has its uses, but it needs to be taken as one fairly small factor in evaluating a player, and even then it's only really relevant for certain positions.
Similarly, coding algorithms on whiteboards doesn't tell you much other than how good the candidate is at coding algorithms on whiteboards. Given that the vast majority of their revenue comes from either search (which was built over a decade ago) or companies they've acquired, and the numerous well-hyped failures (Buzz, Wave, +, etc.), I don't think it's actually working all that well for Google either.
I'm not saying it's a perfect indicator, I'm saying it's a cheap and easy indicator. Certainly there are outliers in every direction, but a running back who runs a 5.0 forty just isn't going to be very good (unless he's like 400 pounds).
So is it more likely that Google is filled with idiots who can't see the error of their ways, or that, across tens of thousands of individual hires, this is the most cost-effective approach and produces the best results (minimizing false positives) on average?
College-admissions style interviewing just doesn't make sense for a company like Google.
FWIW, there are a couple edge cases (like javascript/frontend engineers) which my company (Facebook) has completely separate hiring tracks for. It works for us, but I'm sure Google has its own reasons for not doing that.
Actually, I'd say that whiteboard interviews for Google are an arbitrary way of selecting from already-qualified candidates, just as 40 times might aid a team in choosing between two otherwise similar players.
The players invited to the combine are the ones teams are considering drafting anyway; all the 40 times do is move players up or down the list by generally small amounts. The point isn't that 40 times are useless, it's that they provide very little additional information about a player. Champ Bailey was going to be a high draft pick no matter what he did at the combine, and everyone already knew that Trindon Holliday was fast but probably too small to succeed in the NFL.
Likewise, someone with a 3.9 from MIT or a bunch of good open source work who's coming in for an in-person interview is already qualified, and the whiteboard doesn't tell you anything new. I'd guess Google sticks with them for the same reasons teams tout 40 times - it's good marketing both internally (making decisions seem less arbitrary) and externally ("look how tough our interviews are" is a more socially acceptable way of saying "look how smart we are"), and it allows people to deflect blame if a hire doesn't work out. Judging by the number of posts about Google interviews I see here and elsewhere, the marketing is certainly successful.
The vast majority of interviews do not result in a hire. (Some of my coworkers have reported giving 20 in a row without a single offer.)
Also, I think your view of the interview pool is somewhat skewed. Most of the candidates I see do not have 3.9s from MIT (BTW, I believe MIT uses a 5-point GPA scale, so it really would be a 4.9), and a lot didn't go to Ivy League universities.
Google has separate tracks for FE SWEs as well. It's just that you also need a basic proficiency with C++/Java and algorithms to be a Google FE SWE, while I'm not sure that Facebook requires that? (Perhaps because Facebook's frontends are written in PHP instead of C++/Java.)
FWIW, Search is continually being rewritten, and the bulk of the current codebase was written in the past 2 years or so.
I'm kinda curious what it'd look like if you took the 2002 version of Google and used it on today's Internet. My guess is it would feel incredibly dated and virtually useless because of spam. We have a couple archived UX studies that were done with the old (pre-2010) UI; I remember that when we launched everybody said "Eww, I hate the new UI. Why change a good thing, Google?" and now when they look at the old UI they're like "Omigod, I can't believe I ever managed to look at that. It's like something straight out of 1998."
Looking at that list... I wouldn't count the three from 2010/2011 because it's too early to tell, so that leaves 12 players. Of those 12, I would say there are 2 superstars (Bailey, Johnson), 2 great players (Rodgers-Cromartie, Routt), a #1 wideout and potential emerging star (Heyward-Bey), and 3 players with promising early careers that were derailed by injury/death (Mathis, Washington, Williams).
That's a pretty fantastic hit rate, especially given how rare it is for any draft pick to work out. Similarly, if Google tries a bunch of projects of which only 10% are expected to work, but 20% of them end up working, then they still did a great job even though 80% of their projects are failures.
It would be cool if there were some way to exclude cities within a certain radius. It seems that most searches (inbound and out) are for nearby cities, and the fact that a lot of people move between Boston and Cambridge isn't particularly informative.