I turned on the AI features last week. Last night, CarPlay noted I had an incoming text message. Instead of reading me the whole message, it summarized it (poorly) and then asked if I wanted to hear the entire message. I replied "yes", but it did nothing. A few minutes later, another message was received and the same thing happened.
There is no way this feature saw any acceptance testing prior to release.
I wear a Motiv fitness monitor ring and it tracks sleep very accurately. I've noticed over the past year wearing it that the nights that leave me feeling best in the morning usually have a period of 45 minutes to an hour that logs as awake time -- usually about halfway through my sleep session. Oddly, I rarely recall being awake during this time.
Was there evidence presented that the Las Vegas shooter was motivated by hatred for some group (political or otherwise)? I guess I missed that in the papers.
It's not practiced in ND or anywhere else. The export market won't accept it and it's not deliverable against spring wheat futures, so it's not marketable and never has been.
Scanning the article, it appears the author is using two years of data to reach conclusions, and only because just two years of data were available. If these same criteria were measured with two years' worth of data ending on December 31, 2008, the results would look quite a bit different.
Unfortunately, the website I used to scrape the data only had 2 years of data. But each datapoint is looking at each individual analyst rating and price target for a stock and then comparing it to the price in 1 year. However, you are correct that datapoints during a downturn would be quite different.
Founder of MarketBeat (website mentioned) here. We do have more than two years of data (more like 7), but profile pages get too large when we publish it all in one shot. We can make the full history available as a giant CSV file if there's a researcher that wants to do a similar analysis over a long period of time.
Hey, I'm no longer affiliated with any site. If you're looking for data, I'd compare the TipRanks data set as well. I've found it to be a lot bigger than MarketBeat's and noticeably more accurate. Data collection is automated through natural language processing and cross-validated with machine learning techniques, which yields much higher accuracy.
I'm not sure who you should reach out to there - but I think support [at] tipranks.com should work.
Things might have changed since I parted ways with TipRanks, though -- but you probably want both data sets for comparison.
I spent several years in securities and derivatives trading and the most frequently cited reason for avoiding open-source software I heard was that, in the event of a major foul up, there was no one to sue if you got sued yourself. It's not that difficult for an attorney to paint you as reckless for using "free" software.
I spent 13 years writing core trading systems for many of the well-known exchanges. We used open source wherever possible because the software tended to be more reliable. That said, clients usually got to choose the database, and we used Sybase a lot. I have been using Postgres every day of the week for the last eight years and I really like it, but its planner is quite a bit worse than Oracle's or SQL Server's. The Postgres planner is still way, way better than MySQL's, which still lets correlated subqueries explode into cartesian joins. MySQL is great as a data store, but it's more of a replacement for NoSQL than an advanced query engine.
MySQL's planner is predictably stupid; structure complex multi-table predicates as joins (nested if necessary) rather than subqueries, and it behaves almost imperatively. Postgres, OTOH, is very unpredictable: sometimes it does the right thing, and sometimes it does something amazingly asinine, where simply swapping a table between the from and join clauses can result in a 1000x speedup.
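The subquery-to-join rewrite for MySQL might look like this, with a hypothetical schema (customers and orders are made-up tables, not from the thread):

```sql
-- Hypothetical schema: customers(id), orders(id, customer_id, total).
-- Correlated-subquery form that MySQL's planner has historically
-- handled badly:
SELECT c.id
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o
              WHERE o.customer_id = c.id AND o.total > 100);

-- Equivalent join form that MySQL executes predictably:
SELECT DISTINCT c.id
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.total > 100;
```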
Specifically, I've seen pg take a query that looks like this:
select ... from a join b join c join (select ...) d
where a has millions of rows and has an equijoin with d, which has 10 rows, and it decides to materialize a x b x c, only joining in d at the last step. But write it like this:
select ... from (select ...) d join a join b join c
and it does the right thing! And analyze gets it right (i.e. the plan for the reordered joins is recognized as better) - never mind genetic optimization, it's lacking analytic optimization.
With the lack of hints, almost the only tool you have to control query plans effectively in postgres is parenthesized joins. Since it's more liable to rewrite the query, the language ends up being less imperative, and thus less predictable. And I like predictability in production.
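As a concrete sketch of that technique (the table names here are hypothetical, not from the example above): parenthesized joins combined with join_collapse_limit = 1 pin the planner to the written order, so the small derived table is joined first.

```sql
-- Hypothetical tables (a and b are large, small_dim is tiny). With
-- join_collapse_limit = 1 the planner keeps the join order exactly
-- as written, so the 10-row derived table d is joined in first.
SET join_collapse_limit = 1;
SELECT a.*
FROM ((SELECT id FROM small_dim LIMIT 10) d
      JOIN a ON a.id = d.id)
     JOIN b ON b.a_id = a.id;
```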
On SQL-level feature set there's no comparison, of course; pg wins easily.
There are settings for choosing between the exhaustive-search planner and the genetic planner. The exhaustive planner is better but can be slow for complex queries with many possible paths. If your query is at all time-consuming, you probably want to increase geqo_threshold and geqo_effort as well as join_collapse_limit and from_collapse_limit.
I'd also suggest disabling nested-loop joins entirely (SET enable_nestloop = off) if you are having problems with bad cardinality estimates resulting in nestloop plans that run 100 times when the planner estimated once.
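Put together, those session-level knobs might look like this (the values are illustrative, not recommendations; Postgres defaults are geqo_threshold = 12 and join/from_collapse_limit = 8):

```sql
SET geqo_threshold = 20;        -- use the exhaustive planner for up to ~20 relations
SET geqo_effort = 7;            -- try harder when geqo does kick in (default 5)
SET join_collapse_limit = 20;   -- let the planner reorder larger join trees
SET from_collapse_limit = 20;
SET enable_nestloop = off;      -- last resort for badly misestimated nestloop plans
```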
Show me an enterprise DB license that offers you better indemnity/liability options than open source. (I negotiated them on behalf of huge clients for many years.)
It's not so much the license per se. It's that the setup is done by a third party that can be blamed when something goes wrong. I saw one contract that specifically said they were insured for 1 million in damages.
The company for the longest time wouldn't even touch basic firewall rules without having the firewall contractor implement it.
In my experience with a couple of banks, it comes down to support. A lot of systems in banks are written for longevity, so they frown upon software that might become obsolete or that people might stop working on five years down the line. A paid vendor, they reason, will at least keep releasing new versions, while free software might not provide enough incentive for people to work on it continuously.
Many also consider looking up issues on Stack Overflow, Google, or blogs unreliable. Then there are times when issues are specific to installs or data, in which case sharing the logs/sample data (even masked) can be risky. They feel comfortable sharing logs/masked data with, for example, Oracle because they believe it to be safe and locked under Oracle's security guidelines.
The second refrain I hear is security. When a major security issue is revealed, there is a general sense that FOSS will be slower to release a "stable" patch. Paid vendors, by comparison, treat it as a reputation risk and work toward quickly releasing a "stable" patch.
If people have to use FOSS, they look for a flavor with paid support. Recently we were evaluating MQ software. When we zeroed in on RabbitMQ, we were asked to deploy only the paid Pivotal version and not the free version because "support".
Sure, these things might not be completely true, but for many higher-ups, paying for something somehow makes them sleep better at night than a "free" alternative does.
This is wrong on so many levels. I suppose you mostly understand that, but here are the counterarguments:
> Written for longevity
OSS is much better at longevity than proprietary software. Even if the authors all die without a will, it is possible to fix the little bugs that prevent you from using the software on [NewTechnologyHere]. I've done it countless times with Java software. If anything, OSS is the guarantee that you own your future and that the system will keep working as a legacy system.
> Use paid flavor
It's good, but what's better is joining the golf club of a principal maintainer. The key is paying him to fix the issue you're having quickly and to merge the fix upstream.
I think there is more to this than that, though. Software companies usually run R&D at about 10% of revenue, so these support contracts are really returning only 10% of their cost. And the companies buying the support contracts are usually big. The money they spend on 10 developers for support could pay for 100 developers once you add in license fees, etc.
Long, long ago when I worked at Nortel (a now defunct, but then huge telecommunications company), they used to pay millions of dollars a year to Cygnus to support a particular embedded version of GCC. This, despite the fact that Nortel had more than 10k programmers on staff including a compiler team!
I think the real reason these support contracts exist is because companies (even large ones) don't want to dilute their focus maintaining projects that are peripheral to their core business. It's not so much a technical problem, or a money problem -- it's a management problem. They can't scale out to handle every little thing.
I think OSS is a red herring in this conversation. Most companies just don't care about that. They don't want to support it themselves (even if they are big enough to do so), and they need to have confidence in the company that provides the support. Build that company (hint: you need to be sales heavy!) and you could sell Postgresql just as easily as any other database. Of course breaking into an entrenched area in Enterprise software is always going to be difficult, so I'm not sure how successful you would be with this particular product, but you get my point, I think.
There are several companies which sell PostgreSQL like that with EnterpriseDB and 2ndQuadrant being the two largest ones. It seems like these companies are at least semi-successful since they hire more people all the time. So I agree with your idea, that you just need to convince the enterprise customers that you are a reliable partner.
Sure, OSS has a longer lifecycle because a dev can look at the source code and fix it. But companies don't want to spend twice. In the case of a DB, for example, they would rather have a DBA manage it. They don't want to hire both a developer and a DBA -- that's how they view it. Sure, you can find someone who is good at both, but they are few and far between. It is much easier to have an Oracle DBA manage Postgres with paid support than to find a developer with enough programming experience under his belt to take care of Postgres issues.
As hindsightbias puts it, they want solutions and 24/7 support.
The author cites research that ties a downfall in civic engagement to the demise of local news coverage. While I find this correlation to be obvious (to my line of thinking), she assigns no responsibility for this demise to her peers -- the ones writing news and opinion.
In the large Midwestern cities in which I have lived, local newspapers generally choose to align themselves on the side of local governments and chambers-of-commerce on virtually every new development subsidy or tax deal regardless of the costs to be incurred by local residents and businesses and/or the sketchiness of the scheme.
Readers look to the fourth estate for a voice when elected representatives collude with special interests. If they are merely mouthpieces and cheerleaders for those in power, readers will look elsewhere or disengage.
Small-town papers are beholden to small-town interests. Stone notes that the large-city and national dailies (NY Times, Washington Post, LA Times, possibly the Boston Globe, the Wall Street Journal at the time) were freed of dependence on any one advertiser (or political interest). That may have been a peculiar circumstance of the 1960s and 1970s.
The last observation in that interview is still current.
> The president, irrespective of who he is, today, is so powerful that the temptations of the office for good or evil are too great for any one man. I think we ought to begin to dismantle the office. I think we ought to have a head of state symbolizing the country around whom the natural feeling of patriotism and reverence accrue and separate him from the head of the government.
This will happen naturally as small-scale self e-governance takes on more and more responsibility. As the responsibilities of the state dry up, city, state, and federal government figures naturally become mere figureheads. It will be a race to see who has the best parades and the funnest hats.
I have a site that creates a subdomain for each new enterprise account, and all subdomains rely on one StartCom wildcard cert. Ultimately, I can write a script to create a Let's Encrypt cert for each new subdomain, but I've got plenty of other work on my plate at the moment.
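A minimal sketch of that per-subdomain script, assuming certbot with the nginx plugin is installed and DNS for the new subdomain already points at the host (the domain and contact address below are hypothetical placeholders, not the commenter's actual setup):

```shell
# Assemble the certbot call for a new tenant subdomain.
SUBDOMAIN="${1:-acme-corp}"       # hypothetical tenant slug
DOMAIN="example.com"              # placeholder base domain
CMD="certbot certonly --nginx -d ${SUBDOMAIN}.${DOMAIN} --non-interactive --agree-tos -m admin@${DOMAIN}"
echo "$CMD"   # swap echo for a direct invocation once verified
```

Hooking this into the account-creation flow would give each new subdomain its own cert instead of the shared wildcard.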