Hacker News | d0vs's comments

How would I go about using my own style?


Having read the docs I'd assume you need to pass your styles using a `config` prop.


Yeah, for now. I'll eventually add some way to pass in custom classes or something.


I personally find a prop to be a perfectly acceptable solution - you can define styles in your JS and share between components if need be. But I reckon there are folks who'd prefer passing class names to define their styles in CSS.


Apple builds overpriced laptops with low-end performance but high build quality; I'd rather have the performance at a much lower price.


Problem is, you're already pretty well catered to, I should imagine. That's literally what 90% of the OEM laptop market competes on.


The "build quality" means it looks nice, but it sure isn't durable. If you spill liquid on an Apple, it's a dead machine. If you drop an Apple, it's also dead. Thinkpads can survive a hell of a lot more abuse.


Every time a friend or colleague asks why I use "that", I simply throw my laptop on the floor, pick it up, and continue working on it.

Then I ask them to try the same with their Apple laptop.

That normally makes them never ask such questions again.

I learned that from an IBM (and later a Lenovo) sales person when presenting the X series. It's designed and built to tolerate real use. And some abuse :-)


People's needs and preferences differ.

I've used laptops for the last 10-15 years, often 8+ hours a day. I have not once dropped a laptop during those years. I'm completely okay with having a laptop that disintegrates when dropped and is better in other ways (thinner, lighter, longer battery life) as a result of not focusing on resistance to drops.


I guess it depends on your laptop. I dropped my MacBook once (it was closed and got a big scratch on the case), but otherwise I'm very careful with it. I didn't drop my iPhone either. But when I owned an old indestructible Nokia phone, I dropped it a few times a week, just because I didn't really care. If I owned an indestructible laptop, I might drop it sometimes as well. If I'm lying in bed with my laptop and I want to sleep, I have to carefully position the laptop on the floor. I would happily just toss it aside if I could; it would be so much easier.


Definitely agree on "If you drop an Apple, it's also dead", but "If you spill liquid on an Apple, it's a dead machine" hasn't been true for me: I spilled a whole glass of water on my MBA while it was running and it immediately turned off. After drying it on the heater for half a day, it turned back on without any problems or damage.


Check out Louis Rossmann's Apple repair channel on YouTube. The vast majority of boards he repairs are broken because of liquid damage (he does actual board-level repair using microsoldering, unlike Apple which merely replaces everything). It's definitely a huge problem.


But isn't this a problem for nearly all laptops?


Mythbusted: "It's a more durable laptop because it's made of aluminum"

https://www.youtube.com/watch?v=t7XSckjRPo0


What does this have to do with liquid damage?

(Sorry can't watch the video right now because I'm on mobile data)


I don't believe that I actually want improved performance. I just want a smooth laptop that is thin, has long-lasting battery life, and is price competitive. Essentially a MacBook at a reasonable price.


Downvoted: opinion presented as fact


Compare the cheapest Mac ($1000) to an $800 ASUS:

https://www.apple.com/shop/buy-mac/macbook-air

https://www.amazon.com/i5-6300HQ-keyboard-Microsoft-signatur...

The ASUS has:

- a larger screen

- a better CPU

- a better GPU

- as much memory

- more storage


- disappointing screen quality (size isn't the only thing that matters)

- twice the weight

- a quarter of the battery life

- one of the worst touchpads around


That's maybe not a great comparison; it's a different class of product, with worse battery life and much more weight. The MacBook Air has been optimised for a different use case.


The MacBook Air is an outdated product.

The closest real comparison is the MacBook Pro to the Surface Pro, in which case the MacBook is better at every price tier (except for the touchscreen, which is genuinely useful).


...an outdated product for $1k.



Chrome-only and that's if there's no bug. I have the latest version of Chrome on Ubuntu but my GPU (GTX 960m, latest drivers) is blocked and the --ignore-gpu-blacklist flag doesn't work.



Can anyone give concrete examples of datasets that are better suited for a graph database and why?


Anything social. Product or person hierarchies. Network datasets. Ancestry (genetic or data), etc.

They are better suited to graph databases because the queries tend to involve many joins, traversing paths both deep and wide.
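The "many joins traversing paths" point can be sketched in plain SQL, using a made-up follower graph (a graph database expresses the same traversal natively, e.g. as a single path pattern, instead of join-per-hop):

```python
import sqlite3

# Hypothetical social graph: who can alice reach, and in how many hops?
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE follows (src TEXT, dst TEXT)")
con.executemany("INSERT INTO follows VALUES (?, ?)",
                [("alice", "bob"), ("bob", "carol"), ("carol", "dave")])

# In a relational store, each extra hop is another self-join; WITH RECURSIVE
# hides the repetition, but the work is still one join per hop. A graph
# database stores adjacency directly, so the traversal just follows edges.
rows = con.execute("""
    WITH RECURSIVE reach(person, depth) AS (
        SELECT 'alice', 0
        UNION
        SELECT f.dst, r.depth + 1
        FROM follows AS f JOIN reach AS r ON f.src = r.person
        WHERE r.depth < 3
    )
    SELECT person, depth FROM reach WHERE depth > 0
""").fetchall()
print(rows)  # bob at depth 1, carol at 2, dave at 3
```

The deeper and wider those paths get, the more the join-per-hop cost hurts, which is exactly the workload graph databases optimise for.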


What's it called exactly? I can't find it.


Sorry I was thinking of autofill suggestions:

"Substring matching for Autofill suggestions" chrome://flags/#enable-suggestions-with-substring-match

For the address bar it already does substring for me pretty well so I can't comment on that, sorry about that.


I wonder who will win in the long term between Brotli and zstd. http://facebook.github.io/zstd/


There are some design decisions in Brotli I just don't quite understand [1][2][3], like what's going on with its dictionary [2]. One of the Brotli authors is active in this thread, so perhaps they can talk about this.

Zstandard is pretty solid, but lacks deployment in general-purpose web browsers. Firefox and Edge have followed Google's lead and have added, or are about to add, support for Brotli. Both Brotli and Zstandard see usage in behind-the-scenes situations, on the wire in custom protocols, and the like.

As for widespread use on files sitting on disk, on perhaps average people's computers, I think we're quite a few years away from replacing containers and compressors that have been around for a long time and are still being used because of compatibility and the lack of pressure to switch to a non-backwards-compatible alternative [4].

[1] https://news.ycombinator.com/item?id=12010313 [2] https://news.ycombinator.com/item?id=12003131 [3] https://news.ycombinator.com/item?id=12400379 [4] https://news.ycombinator.com/item?id=13171374


> https://news.ycombinator.com/item?id=12003131

This is some sort of misunderstanding. If one replaces the static dictionary with zeros, one can easily benchmark brotli without the static dictionary. If one actually benchmarks it, one learns two things:

1) With short (~50 kB) documents there is about a 7% saving because of the static dictionary. There is still a 14% win over gzip.

2) There is no compression density advantage for long documents (1+ MB).

Brotli's savings come to a large degree from algorithmic improvements, not from the static dictionary.

> https://news.ycombinator.com/item?id=12010313

The transformations make the dictionary a bit more efficient without increasing its size. Consider that out of the 7% savings that the dictionary brings, about 1.5 percentage points (~20%) are because of the transformations. However, the dictionary is 120 kB and the transformations are less than 1 kB, so the transformations are more cost-efficient than the basic form of the dictionary.

> https://news.ycombinator.com/item?id=12400379

Brotli's dictionary was generated with a process that leads to the largest gain in entropy, i.e., every term and its ordering were chosen for the smallest size, considering how many bits it would have cost to express those terms using other features of brotli. Even if the result looks messy or difficult to understand, the process that generated it was quite delicate.

The same goes for the transforms, but there it was mostly the ordering that we iterated on, generating candidate transforms using a large variety of tools.


ZSTD.

It is superior to Brotli in most categories (decompression speed, compression ratios, and compression speeds). The real issue with Brotli is the second-order context modeling (compression level >8), which costs you ~50% of your compression speed for less than a ~1% gain in ratio [1].

I've spoken to the author about this on twitter. They're planning on expanding Brotli dictionary features and context modeling in future versions.

Overall it isn't a bad algorithm. Brotli and ZSTD are head and shoulders above LZMA/LZMA2/XZ, pulling off comparable compression ratios in half to a quarter of the time [1]. They make gzip and bzip2 look outdated (which, frankly, is about time).

ZSTD really just needs a way to package dictionaries WITH archives.

[1] These are just based on personal benchmarks while building a tar clone that supports zstd/brotli files https://github.com/valarauca/car


What use case do you have in mind for packaging dictionaries with archives? There is an ongoing discussion about a jump table format that could contain dictionary locations [1].

[1] https://github.com/facebook/zstd/issues/395


For large files (>1 GiB), a dictionary + archive is often smaller than the archive on its own.


How are you compressing the data?

I would expect a dictionary to be useful if the data is broken into chunks, and each chunk is compressed individually.

If the data is compressed as one frame, I would be very interested in an example where the dictionary helps.
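For the chunked case, the effect is easy to demonstrate with the preset-dictionary support in Python's stdlib zlib (zstd's dictionary API is analogous; the records and dictionary below are made up for illustration):

```python
import zlib

# Hypothetical workload: many small, similar records compressed one by one.
records = [
    ('{"user": "u%d", "event": "click", "page": "/home"}' % i).encode()
    for i in range(200)
]

# A preset dictionary seeded with the boilerplate the records share.
dictionary = b'{"user": "u", "event": "click", "page": "/home"}'

def compressed_size(chunk, zdict=None):
    co = zlib.compressobj(zdict=zdict) if zdict else zlib.compressobj()
    return len(co.compress(chunk) + co.flush())

plain = sum(compressed_size(r) for r in records)
with_dict = sum(compressed_size(r, dictionary) for r in records)
print(plain, with_dict)  # per-chunk sizes shrink when the dictionary matches
# (the receiver must decode with zlib.decompressobj(zdict=dictionary))
```

Each small chunk on its own has no prior data to back-reference, so the shared dictionary supplies it; that's why dictionary + per-chunk archives can beat one dictionary-less archive for this shape of data.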


In my benchmarks brotli compresses more densely, compresses typically faster to a given density, but decompresses slower.

I benchmark with internet-like loads, not with 50-1000 MB compression research corpora.


When I last ran the numbers a few months ago [1], for the same time spent in the compressor, zstd almost always produced a smaller output than brotli.

1. https://code.ivysaur.me/compression-performance-test/


For now at least, Brotli is the winner. It's already in the browsers.


It baffles me that all those JS physics libraries never provide proper docs or even an API reference, and always link to the C++ Box2D manual as if it were an acceptable alternative. You always have to guess what the JS equivalent is, and even then you're in for a surprise: https://github.com/shakiba/planck.js/blob/master/CHANGES.md


I'm guessing it comes down to "writing docs takes lots of time, and when you're a lean operation priority goes to making stuff work."

I'm sure once the project matures it'll get a documentation pass, but the project is quite young.


As the dev of an unrelated lib who cares about making good documentation, I find this a really valuable comment; I'm partially guilty of doing this myself, for example in https://github.com/franciscop/drive-db, where I link to the MongoDB docs for the more advanced selectors.

In my experience good documentation takes around 3x-5x of the time of writing the code (excluding tests and tutorials), so while I hate seeing libraries without a decent documentation I totally understand it.

Edit: also, it is in alpha, where writing documentation often backfires: you end up having to remove large pieces of it, wasting your time.


I've found writing documentation and trying to explain to others why the API is the way it is frequently makes me completely reconsider and redesign the API to be better. So no, you really should start on documentation right away, because what you're calling "waste" right now is a feature.


That is interesting; I think the way I do it is quite similar. Normally I write the examples that would go in the documentation first, then work around them. But I don't write the whole text plus proofreading, which is what takes most of the time for me.

Edit: see this "others" folder for example: https://github.com/franciscop/server/tree/master/others


That does help, but I've found that getting people to try using your API does an even better job of informing you about your design decisions.

Last time I was building a novel API, I opted to provide commented example code for documentation, recruited several members of the intended audience to try out the API, and just offered to answer questions directly. I wrote out actual documentation in preparation for the "official 1.0 release."

At some point it comes down to choosing how to spend your limited time. Writing documentation isn't going to be a "waste," but it's probably not high ROI because your early adopters are likely familiar with the problem domain anyway.


Oh c'mon, I never ever said writing documentation is a waste!

What I mean is that writing the final documentation, with all that implies, before you even know what the API is going to look like is a waste. Though I agree with the parent: stubbing out some docs to get a better feeling of what they'll look like is valuable.


Those are changes to internals; there are only a few API differences, which are explained in the readme. Anyway, I agree with you: a new API doc would be very helpful.


On one hand, I agree with you that linking to the documentation of another library, in another language, isn't very helpful. On the other hand, it's an open-source project. It has to start somewhere, and referencing the original docs is better than nothing. If you've looked at a variety of libraries along these lines and found them lacking, perhaps you're an ideal person to contribute better docs to this project.


For me, an example is worth 10 pages of documentation. The documentation is good for reference but only after you understand the ideas behind it.

One of the biggest criteria I have for using a library is whether they have a rich set of examples to draw from.


Conversely, after the initial struggle of getting the ideas behind it, examples become worthless and documentation is king.


Sometimes. Examples are good at giving concrete working code to use as a basis but they only express a few ideas. Documentation will let you understand the different pieces so you can build what you like.

Both are beneficial for different scenarios. I find I use examples much more often but when examples fail, documentation is necessary.


Good documentation has tons of examples! For me the gold standard is jQuery's documentation. When I didn't know enough JS, the prose at the top was cryptic, but the examples below made it click; once I was experienced, I could just read the top for the signature.


If you're used to dynamic languages, it gives you type safety and performance for little effort.

I love Python but always wished for a simple, type-safe language; Go gives me that. It's not worse than Python IMHO.


Without support for metaclasses, annotations, generators, iterators, and list comprehensions, it surely is worse.
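For readers less familiar with those features, here is a tiny (made-up) Python illustration of two of them, generators and list comprehensions, which Go has no direct equivalent for:

```python
import itertools

# Generator: lazily yields values on demand, no upfront allocation.
def squares():
    n = 0
    while True:
        n += 1
        yield n * n

first_three = list(itertools.islice(squares(), 3))

# List comprehension: build a filtered, transformed list in one expression.
evens_doubled = [x * 2 for x in range(10) if x % 2 == 0]

print(first_three, evens_doubled)
```

In Go the same logic takes an explicit loop (or a goroutine feeding a channel, as mentioned downthread); whether that terseness is worth the extra language surface is exactly what this subthread is debating.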


Unless you're one of the people (like me) who consider its lack of many of those things a feature. More features don't necessarily make something "better", and fewer features don't necessarily make it "worse" (whatever your definitions of "better" and "worse").


No, I am not; I am no longer programming as in the mid-90s, and even in those days Turbo Pascal had more features than Go.

I only advocate Go as a replacement for those that would use C for user space applications, or possibly some kind of low level stuff.

If the Python and Ruby ecosystems had blessed compilers, instead of just CPython and MRI, I doubt people would be flocking to Go.

We already see this happening in the Ruby world, just let Crystal become a bit more mature.


Goroutines and channels can substitute for generators and iterators, though perhaps with the downside of being more machinery than you need.


Colleagues I've spoken with who use both still say that Python is "20 times" as productive as Go. For some applications this multiplier likely goes down considerably, but Python holds an edge in a lot of areas.


This looks like an AOT compiler to me, not a JIT compiler.


This is only true under the strictest interpretation of what "JIT compilation" means. Check out the Aycock paper -- http://eecs.ucf.edu/~dcm/Teaching/COT4810-Spring2011/Literat... -- for a broader treatment.


You're right. This was an amazing couple of posts anyway, I hope you continue the series!

