
we've been tracking the deepseek threads extensively in LS. related reads:

- i consider the deepseek v3 paper required preread https://github.com/deepseek-ai/DeepSeek-V3

- R1 + Sonnet > R1 or O1 or R1+R1 or O1+Sonnet or any other combo https://aider.chat/2025/01/24/r1-sonnet.html

- independent repros: 1) https://hkust-nlp.notion.site/simplerl-reason 2) https://buttondown.com/ainews/archive/ainews-tinyzero-reprod... 3) https://x.com/ClementDelangue/status/1883154611348910181

- R1 distillations are going to hit us every few days - because it's ridiculously easy (<$400, <48hrs) to improve any base model with these chains of thought eg with Sky-T1 recipe (writeup https://buttondown.com/ainews/archive/ainews-bespoke-stratos... , 23min interview w team https://www.youtube.com/watch?v=jrf76uNs77k)

i probably have more resources but don't want to spam - seek out the latent space discord if you want the full stream i pulled these notes from


Here are the notes I wrote for myself as a Magic player, to translate it into purely MTG terms. (These probably aren't enough to explain on their own, but they'll probably help MTG players who want to get the gist.)

Your opponent has 21 life and you win when your creatures have at least that much power. You can’t attack.

Setup: dealer goes second and starts with 6 cards, opponent starts with 5 cards. Hand limit of 7.

On your turn: Either play 1 card or draw 1 card

Point cards (ace - 10; ace is 1) are creatures with power equal to their point number. Face cards (and sideways 8) are enchantments. No lands or mana costs. "Playing" a card refers to casting that card or channeling that card.

Every point card has “channel - discard this card: Choose a creature with lesser value. Destroy it.” (suit matters, spades > hearts > diamonds > clubs, e.g., 8 of hearts is greater value than 8 of diamonds or any 7 but less than 8 of spades or any 9.) Note that this doesn't target.
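The suit-ordering rule above can be sketched in code (a hypothetical illustration; the rank/suit encoding is my own, not part of the original notes):

```python
# Suit tiebreak: spades > hearts > diamonds > clubs.
# A card is a (rank, suit) pair; compare rank first, then suit.
SUIT_ORDER = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}

def beats(attacker, target):
    """True if `attacker` may destroy `target` via channeling
    (the target must have strictly lesser value)."""
    a_rank, a_suit = attacker
    t_rank, t_suit = target
    return (a_rank, SUIT_ORDER[a_suit]) > (t_rank, SUIT_ORDER[t_suit])

# Per the example: 8 of hearts beats 8 of diamonds and any 7,
# but not 8 of spades or any 9.
```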

Most point cards can be played as sorceries for an alternate effect:

Ace: wrath of God

2: disenchant OR muddle the mixture (this is the only instant and does not count toward your 1 card per turn limit. Everything else is sorcery speed)

3: regrowth

4: mind rot

5: divination

6: tranquility / back to nature

7: mind’s desire

8: sideways as enchantment - glasses of Urza

9: aura extraction*

10: none

Face cards are exclusively enchantments:

Jack: control magic**

Queen: Privileged position***

King: set your opponent’s life total according to the number of kings you control, for as long as they remain on the battlefield: 0: 21; 1: 14; 2: 10; 3: 7; 4: 5.

Notes: The card types are pretty explicit - muddle the mixture can only counter sorceries or instants, not creatures, enchantments, or channeling. Wrath of God only kills creatures, tranquility only kills enchantments.

Rules can differ, depending on the source:

* sometimes as "reflector mage for enchantments", sometimes as "unsummon for enchantments". ** sometimes as "exchange control of target creature". *** sometimes as "all permanents you control have hexproof", i.e., including itself.


Kenney is awesome.

OpenGameArt.org (OGA) has a lot of libre/free assets (Kenney often posts on OGA):

https://opengameart.org/

Itch.io also has many CC0 and CC-BY licensed assets:

https://itch.io/game-assets/assets-cc0

https://itch.io/game-assets/assets-cc4-by


Kasm [1] also has ready-to-use images that work similarly. They can also be customized to contain your own applications or configuration. They're intended for use with the Kasm Workspaces solution, but they work standalone just fine.

[1] https://hub.docker.com/u/kasmweb


This is a great survey. Combine it with the courses below for best results:

https://www.trybackprop.com/blog/top_ml_learning_resources


This sounds very interesting. I hadn't heard of TEMPO.

Reference and data are spread across orgs/sites.

Here's what Level 1, L2, and L3 mean: https://tempo.si.edu/data_for_scientists.html

ASDC has info about data formats etc (as PDFs ofc) https://asdc.larc.nasa.gov/project/TEMPO

And Earthdata seems to be the warehouse + visualizer https://search.earthdata.nasa.gov/search?fpj=TEMPO&as[projec...


Stanford's NLP Group has a good list of more specialized NLP courses (as well as CS224N, basically their CS388) - https://nlp.stanford.edu/teaching/

CS 124: From Languages to Information

CS224n: NLP with DL from Stanford

CS224U: Natural Language Understanding (Lecture Videos)

CS224S: Spoken Language Processing

CS276 : Information Retrieval and Web Search

CS324 - Large Language Models

LING 289: History of Computational Linguistics

Some others are below https://nasmith.github.io/NLP-winter22/about/

https://www.cs.princeton.edu/courses/archive/fall22/cos597G/

https://self-supervised.cs.jhu.edu/fa2022/ (has a list of other NLP courses at the bottom)

http://demo.clab.cs.cmu.edu/NLP/ (has a list of other NLP courses at the bottom)

I found it useful to compare various schools' NLP courses when doing my own learning, to get different viewpoints.


Perhaps one of my favorite quotes I've read on Hacker News:

"This was one of those big eye opening moments for me. Consultants are hired mercenaries in corporate warfare, they don't care about you, they don't care about your company or the rivalries or the squabbling. You pay them a bunch of money to come run roughshod over your enemies by producing reams of analysis and Powerpoints, to fling the arrows of jargon, and lay siege to your enemies' employees by endlessly trapping them in meetings and then they depart. Consultants are brought in to secure your flank, to provide air cover and to act as disposable pawns in interoffice combat.

They are not brought in to solve problems, to find solutions, or because of their incredible acumen. It's because they have no loyalty or love but money."

- Kneebonian


Hugging Face released a Colab Notebook for generation from SDXL Turbo using the diffusers library: https://colab.research.google.com/drive/1yRC3Z2bWQOeM4z0FeJ0...

Playing around with the generation params a bit, Colab's T4 GPU can batch-generate up to 6 images at a time at roughly the same speed as one.


Integral Neural Networks (CVPR 2023 Award Candidate), a nifty way of building resizable networks.

My understanding of this work: A forward pass for a (fully-connected) layer of a neural network is just a dot product of the layer input with the layer weights, followed by some activation function. Both the input and the weights are vectors of the same, fixed size.

Let's imagine that the discrete values that form these vectors happen to be samples of two different continuous univariate functions. Then we can view the dot product as an approximation to the value of integrating the multiplication of the two continuous functions.

Now instead of storing the weights of our network, we store some values from which we can reconstruct a continuous function, and then sample it where we want (in this case some trainable interpolation nodes, which are convolved with a cubic kernel). This gives us the option to sample different-sized networks, but they are all performing (an approximation to) the same operation. After training with samples at different resolutions, you can freely pick your network size at inference time.

You can also take pretrained networks, reorder the weights to make the functions as smooth as possible, and then compress the network, by downsampling. In their experiments, the networks lose much less accuracy when being downsampled, compared to common pruning approaches.
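A minimal sketch of the resampling idea described above, as I read it (plain linear interpolation stands in for the paper's trainable cubic-kernel interpolation, and the "learned" weight function is a made-up quadratic):

```python
import bisect

# Layer weights are stored as samples of a continuous function at fixed
# "interpolation nodes"; a layer of any width is produced by resampling.
NODES = [i / 7 for i in range(8)]         # fixed node positions in [0, 1]
VALUES = [x * (1 - x) for x in NODES]     # stand-in for learned weight values

def weight_fn(x):
    """Piecewise-linear continuous weight function through the nodes."""
    i = min(max(bisect.bisect_right(NODES, x) - 1, 0), len(NODES) - 2)
    t = (x - NODES[i]) / (NODES[i + 1] - NODES[i])
    return VALUES[i] + t * (VALUES[i + 1] - VALUES[i])

def layer_output(inputs):
    """Dot product as a Riemann-sum approximation of an integral."""
    n = len(inputs)
    w = [weight_fn(i / (n - 1)) for i in range(n)]  # resample weights at width n
    return sum(a * b for a, b in zip(inputs, w)) / n

# Widths 16 and 256 approximate the same underlying operation:
# layer_output([1.0] * n) approaches the integral of x(1-x), i.e. about 1/6.
```

Different sampled widths all approximate the same integral, which is why the network can be resized after training.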

Paper: https://openaccess.thecvf.com/content/CVPR2023/papers/Solods...

Code: https://github.com/TheStageAI/TorchIntegral


I used to work in the securities division of Goldman. Traders are generally amazing at excel and one way to completely lose their respect is to suck at excel. I've more than once seen a trader look on in horror as a programmer fumbles around in some excel model the trader has asked for help with. At one point I had to turn some genuinely insane excel models (think 10mins to load the sheet, 10-15mins to recalc) into code (my version of that model ticked >1000x per second and had more functionality than the original), so I learned to get good at excel and learned the kinds of things people who are amazing with it are able to do and how powerful it is.

So with that here's one super easy tip and one foundation pathway for long-term learning.

The super-easy tip is to learn some basic shortcuts[1] so you can move quickly and get shit done without constantly reaching for the mouse. In particular, learn the "Ctrl - arrow" shortcut (move to the end of the contiguous data in the direction of the arrow), the "Ctrl + Home" shortcut (Move to the start/top left corner of the spreadsheet) and realise that when you're holding down shift this means you can select regions of data for cut and paste or other operations really quickly. Also learn:

1) If you're using a mac you'll need to turn off some of the exposé features or rebind them if you want decent excel keyboard shortcuts. Small price to pay imo but your opinion may differ of course.

2) If you're using anything other than an archeological version of excel you're going to have to come to terms with that stupid ribbon thing. Luckily, from a keyboard-shortcut point of view, the ribbon means you have one simple set of shortcuts to learn to access any icon on it. On PC just press "Alt" and your ribbon will light up with the keyboard shortcuts for everything on it. On Mac I can't remember how you do this, and a quick scan doesn't reveal it; my chops are a little rusty because I only ever really used excel seriously on Windows.

OK, now that you won't be painfully hobbling around the app one row or column at a time and reaching for the mouse constantly, on to the long-term learning. Understanding the power of excel comes down to realising you are editing a model of a graph computation, then learning and understanding a few key features which are really powerful and will hint at other directions to explore to learn more. I'll give you some examples, but these really are the tip of an incredibly huge iceberg.

1) Autofiltering

Put yourself in a sheet where you have column headings at the top and one contiguous block of data in rows and columns beneath. Go ctrl-home to move to the top left of your data, then go shift-ctrl-end (or shift-ctrl-right and shift-ctrl-down) to go to the end of your data. All your rows and columns should now be selected. Now click on the "AutoFilter"[2] icon, or if you've been paying attention to point 2 above, use "Alt" to choose the icon of a hopper thing on your ribbon. It says "Filter" next to it and the link below has a picture. This allows you to sort and filter your data in very flexible ways, with a UI that's very intuitive for non-technical users. I often point UX folks at this feature when they (inevitably) come up with a pale shadow of this capability.

2) Pivot tables

With your data still selected, go to "Insert" on your ribbon and go "Pivot table", then select "new worksheet". OK, here you have a thing that basically does a select ... group by on your data, with various aggregations, sorting, filtering and a bunch of related functionality, all in a pretty simple wrapper. Play around and get familiar with this. You can do amazing things really quickly with pivot tables. Yes, I know you can do all this and more in pandas, but your MBA colleague can do this with excel in seconds and they can't write a line of code. You may be getting a sense of why people consider excel powerful.
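For reference, here is the pandas equivalent the comment alludes to: the pivot-table workflow is roughly one `pivot_table` call (the sales data here is invented for illustration):

```python
import pandas as pd

# Illustrative data, not from the original post.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["A", "B", "A", "B"],
    "revenue": [100, 200, 150, 250],
})

# Rows = region, columns = product, cells = summed revenue:
# the same select ... group by + aggregate that Excel's pivot table performs.
pivot = sales.pivot_table(index="region", columns="product",
                          values="revenue", aggfunc="sum")
```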

3) Vlookup, hlookup, sumif, countif and friends

OK, that was the entry-level drug; now go find out about vlookup and sumif. These are simple functions that look data up in a table. Typically vlookup takes a sorted table on some reference sheet, looks up some key in the leftmost column and gives you back the value of some cell in that row. Realise this adds higher-order dependencies to the graph of your computation. People do amazing shit with vlookup.

Sumif is a simpler lookup. It takes a table and a predicate and sums up values matching that predicate. It is often used to look up single values where the table isn't sorted but you know you only have one of each key.
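The semantics of those two lookups can be sketched in a few lines of Python (exact-match behaviour only; the price table and helper names are my own invention):

```python
# Illustrative reference table: leftmost column is the lookup key.
prices = [("apple", 1.20), ("banana", 0.50), ("cherry", 3.00)]

def vlookup(key, table, col=1):
    """VLOOKUP-style: find the row whose first column matches `key`,
    return the value in column `col` of that row."""
    for row in table:
        if row[0] == key:
            return row[col]
    return None  # Excel would show #N/A here

def sumif(table, predicate, col=1):
    """SUMIF-style: sum column `col` over rows matching `predicate`."""
    return sum(row[col] for row in table if predicate(row))

vlookup("banana", prices)            # -> 0.5
sumif(prices, lambda r: r[1] > 1.0)  # sums the apple and cherry prices
```

(Real VLOOKUP with approximate match additionally assumes a sorted table, as the comment notes.)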

4) Index, Indirect and Address

We've gone too far to stop now. If you're writing a sheet that uses these you already know you are a bad person and don't care. These are the `eval()` of excel, allowing you to construct arbitrary references to cells or arbitrary functions as strings, then dereference and evaluate them. You can then compose these into other functions. More details of this depravity can be found here[3]. It always makes my day if I am making a sheet that requires any of these functions.

[1] https://support.microsoft.com/en-gb/office/keyboard-shortcut...

[2] https://support.microsoft.com/en-us/office/use-autofilter-to...

[3] https://support.microsoft.com/en-us/office/lookup-and-refere...


The big feature here is the function calls, as this is effectively a replacement for the "Tools" feature of Agents popularized by LangChain, except in theory much more efficient since it may not require an extra call to the API. LangChain selects Tools and their functional outputs through JSON Markdown shenanigans (which often fail and cause ParsingErrors); this variant of ChatGPT appears to be finetuned for the task, so perhaps it'll be more reliable.

While developing a simpler LangChain alternative (https://github.com/minimaxir/simpleaichat) I discovered a neat trick for getting ChatGPT to select tools from a list reliably: put the list of tools into a numbered list, and force the model to return only a single number by using the logit_bias parameter: https://github.com/minimaxir/simpleaichat/blob/main/PROMPTS....
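Why the logit_bias trick works, in miniature (toy logits rather than a real API call; the token strings and values here are invented for illustration):

```python
import math

def softmax(logits):
    """Convert a dict of logits to a dict of probabilities."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Toy next-token logits: left alone, the model would ramble ("Sure...").
logits = {"1": 0.3, "2": 1.1, "3": 0.2, "Sure": 4.0, ",": 2.5}

# Bias the digit tokens upward (the OpenAI API caps logit_bias at +/-100);
# every other token's probability collapses to ~0 after softmax.
allowed = {"1", "2", "3"}
biased = {t: v + (100.0 if t in allowed else 0.0) for t, v in logits.items()}

probs = softmax(biased)
choice = max(probs, key=probs.get)  # the highest-logit *allowed* token
```

The model can now only ever answer with a tool number, and the relative ranking among the allowed digits is preserved.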

The slight price drop for ChatGPT inputs is of course welcome, since inputs are the bulk of the costs for longer conversations. A 4x context window at 2x the price is a good value too. The notes for the updated ChatGPT also say "more reliable steerability via the system message" which will also be huge if it works as advertised.


Hey, thanks for posting. I'm one of the main devs of Hydroflow. It is not built on top of timely or differential (we have those deps for benchmarking). The design goals are a little different: Hydroflow aims to be faster and lower level, with fewer unnecessary clocks. Hydroflow is also single-node; scaling is done with explicit networking rather than through the runtime. It's the lowest level of the Hydro stack.

Hydro homepage: https://hydro.run/

For a fun easy demo check out the Hydroflow surface syntax playground: https://hydro.run/playground

More info on the Hydro project and stack, CIDR '21: https://hydro.run/papers/new-directions.pdf

Info on the "lattice flow" model in Hydroflow: https://hydro.run/papers/hydroflow-thesis.pdf

e: Also happy to answer any questions! :)


My running note on self-hosted password managers:

1Password: since version 8, dead to me due to being cloud-only now (not standalone) and its over-use of Electron and many unverified modules/libraries. Remote storage of passwords only in encrypted form; key stays offline.

vaultwarden: yet another Electron web app, with many unverified modules/libraries. Remote storage of passwords only in encrypted form; key stays offline.

KeePassXC, with Syncthing: leading contender, the best self-hosted solution that stores passwords remotely only in encrypted form, but the iOS build still has unverifiable source code, imposed by Apple. Key stays offline.

NordPass: best zero knowledge remote storage; has apps for Windows, macOS, Linux, Android, and iOS. When it comes to browser extensions, one would be hard-pressed to find a wider selection. You can install NordPass on Chrome, Firefox, Safari, Opera, Brave, Vivaldi, and Edge. Not open-source.

LastPass: hacked in 2022; remote storage of raw passwords.

pwsafe: still the safest CLI-only solution to date. The design of pwsafe (Password Safe CLI) was started by Bruce Schneier, the crypto/security/privacy expert. pwsafe still uses the unbroken Twofish algorithm instead of the currently safer Argon2i, simply because it's faster (after millions of iterations). The recommended Password Safe client is still Netwrix (formerly MATESO of Germany) Password Safe with a YubiKey, but stay away from its web-client variants due to the ease of memory access to JavaScript variables (by the OS, browser, JS engine, and JS language).

The only downside of ANY Password Safe-style GUI client is trusting yet another app repository source.


Washington Street Studios' pottery playlist. Taught me a ton about ceramics. Sadly the lecturer Phil Berneburg has since passed away, but it's more or less the equivalent of the theoretical side of an undergraduate degree in ceramics.

https://www.youtube.com/playlist?list=PLS6Mrdpt53RyauAg8bGN-...


Norman Wildberger's "Wild Linear Algebra" series

https://www.youtube.com/playlist?list=PLIljB45xT85BhzJ-oWNug...

His geometry-centric approach to linear algebra was exactly what I needed to finally grok the subject. Topics like matrix multiplication and determinants went from "why are they defined like this? it makes no sense?" to "of course that's how you multiply matrices, because it's the only logical answer".

It's only later that I discovered Wildberger has some ~strange~ very interesting ideas regarding imaginary numbers, but these ideas don't detract one bit from his presentation of linear algebra. Highly recommended viewing for anyone who is keen on neural networks and machine learning but struggles with understanding the underlying mathematics.


Oh hi!

I'm working on a second book: Practical Math for Programmers. Some details at https://pmfpbook.org/

Every weekend I livetweet my notes and research on the new book over at j2kun@mathstodon.xyz, e.g., https://mathstodon.xyz/@j2kun/110283189611214753

Happy to entertain any ideas folks have for topics! Got a long backlog to go through, but there's always room for more.


Check out https://neal.fun!

Looks like Nolano.org's "cformers" includes a fork of llama.cpp/ggml by HCBlackFox that supports the GPT-NeoX architecture behind EleutherAI's Pythia family of open LLMs (which also powers Databricks' new Dolly 2.0), as well as StabilityAI's new StableLM.

I quantized the weights to 4-bit and uploaded it to HuggingFace: https://huggingface.co/cakewalk/ggml-q4_0-stablelm-tuned-alp...

Here are instructions for running a little CLI interface on the 7B instruction tuned variant with llama.cpp-style quantized CPU inference.

    pip install transformers wget
    git clone https://github.com/antimatter15/cformers.git
    cd cformers/cformers/cpp && make && cd ..
    python chat.py -m stability
That said, I'm getting pretty poor performance out of the instruction tuned variant of this model. Even without quantization and just running their official Quickstart, it doesn't give a particularly coherent answer to "What is 2 + 2"

    This is a basic arithmetic operation that is 2 times the result of 2 plus the result of one plus the result of 2. In other words, 2 + 2 is equal to 2 + (2 x 2) + 1 + (2 x 1).

Here's a link to open up and explore that training data in Datasette Lite: https://lite.datasette.io/?json=https://github.com/databrick...

You can do it yourself: take a track of your favorite rapper without background music, feed it into tortoise-tts, and play with the temperature.

Click on the comment's timestamp to go to the comment detail view. There you'll find, among other things, a favorite button.

The one tail number that the initial Washington Post article linked to is N39MY[1], which is a Cessna 182T registered to NG Research, PO Box 722 in Bristow, VA. That company's web presence is close to zero, basically below the noise floor.

If you google [po box bristow va] you find FAA records for a bunch of other oddly named companies that all have similarly close-to-zero web presence and addresses that are PO Boxes in Bristow: FVX Research, NBR Aviation, NBY Productions, OBR Leasing, OTV Leasing, PSL Surveys, PXW Services. They all seem to like Cessna 182Ts.

If you Google the tail numbers of aircraft registered to those companies, you start to find forum and mailing list posts (often at sites that tilt toward paranoid/conspiracy/right wing, but not always) with people discussing these specific tail numbers and linking them to the FBI. Some of the supposed evidence includes details of radio communications that people have heard, e.g. talking about "being on station" or using callsigns that start with JENNA, JENA or ROSS, which are supposedly used by the FBI. Other posts claim that DOJ/FBI surveillance aircraft often squawk 4414 or 4415 on their transponders.

I monitor aircraft in Los Angeles using an RTL-SDR dongle. I keep a database of almost every transponder ping I receive. You can see some more info, analysis and examples of stuff I've seen (U-2, AF1, AF2, EXEC-1F, E-6 "Doomsday" planes) at http://viewer.gorilla-repl.org/view.html?source=github&user=...

I decided to check my database for planes that have squawked 4414/4415 or used one of the suspicious callsigns. I found 8 aircraft in the past 2 months, several of which exhibit suspicious behavior: flying for hours at a time without going anywhere in particular (I don't have position information for them, but I know they're in the air and not leaving the LA area), flying almost every day for months at a time, squawking 4414 or 4415, and one that used a JENNA callsign.

2 of them are registered to companies with PO Boxes in Bristow, VA. Another is registered to AEROGRAPHICS INC., 10678 AVIATION LN, MANASSAS, VIRGINIA, which googling shows has also been linked to the FBI/DOJ. Several others are registered to WORLDWIDE AIRCRAFT LEASING CORP and NATIONAL AIRCRAFT LEASING CORP in Delaware, similar to other suspected FBI front companies (e.g. Northwest Aircraft Leasing Corp. in Newark, Delaware[2]).
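The kind of squawk/callsign check described could be sketched like this, against a toy ping database (the schema, table name, and sample rows are invented for illustration, not the author's actual setup):

```python
import sqlite3

# Hypothetical transponder-ping log: one row per decoded ping.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pings (
    icao TEXT, callsign TEXT, squawk TEXT, seen_at TEXT)""")
conn.executemany(
    "INSERT INTO pings VALUES (?, ?, ?, ?)",
    [("A4E1C2", "JENNA31", "4414", "2015-05-01T14:02:11"),
     ("A4E1C2", "JENNA31", "4414", "2015-05-01T16:40:55"),
     ("AB12F0", "SWA1421", "1200", "2015-05-01T14:03:09")])

# Aircraft that squawked 4414/4415 or used a JENNA/JENA/ROSS callsign.
suspicious = conn.execute("""
    SELECT DISTINCT icao FROM pings
    WHERE squawk IN ('4414', '4415')
       OR callsign LIKE 'JENNA%'
       OR callsign LIKE 'JENA%'
       OR callsign LIKE 'ROSS%'
""").fetchall()
```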

(I call what I'm doing "persistent sousveillance": using historical sensor data to retroactively identify and track new subjects, it's just that my subjects are the government. One of the surprising things I've found is that all you need to do is look: the weird stuff jumps out right away, e.g. Cessnas registered to fake-sounding companies that loiter overhead for hours every day.)

It's a lot of circumstantial evidence, but at this point it doesn't seem far-fetched that I'm monitoring aircraft involved in persistent FBI aerial surveillance.

My twitter has more info: https://twitter.com/lemonodor/status/595814966382469120

Edit: One other thing worth mentioning is that I was surprised at how many local news stories I turned up while googling these planes & companies that fit the template of "Citizens complain about mystery Cessna flying low, circling over their neighborhood".

[1] http://registry.faa.gov/aircraftinquiry/NNum_Results.aspx?NN... [2] http://www.wired.com/2006/06/mystery_planes_/


Something I generally keep in mind about articles posted to HN:

A large portion of the HN audience really, really wants to think they're smarter than mostly everyone else, including most experts. Very few are. I'm certainly not.

Articles which "debunk" some commonly held belief, especially those wrapped in what appears to be an understandable, logical, followable argument, are going to be cat nip here.

Articles like this are even stronger cat nip. If a member of the HN audience wants to believe they're mostly smarter than mostly everyone else, that includes other members of the HN audience.

So, whenever I read an article and come away thinking that, having read the article, I'm suddenly smarter than a huge number of experts, especially if, like the original article, it's because I understand "this one simple trick!", I immediately discard that knowledge and forget I read it.

If the article is right, it will be debated and I'll see more articles about it, and it'll generate sufficient echoes in the right caves of the right experts. Once it does, I can change my view then.

I am not a statistician, or a research scientist. I have no idea which author is right. But, my spider sense says that if dozens of scientific papers, written by dozens of people who are, failed to notice their "effect" was just some mathematical oddity, that'd be pretty incredible.

And incredible things require incredible evidence. And a blog post rarely, if ever, meets that standard.


There was a limited time window in the early 2000s where many cars used only obfuscated access or a cryptographically insecure PIN code for key enrollment, but most modern cars use an attempt at cryptographic security with a centralized server.

If you want to see what's possible with modern cars, keywords like "VVDI" or "Abrites" and "All Keys Lost" will show you what aftermarket tools are capable of. Generally speaking, the capabilities in these tools are roughly equivalent to those the most sophisticated criminals have, as they're usually just stealing the techniques from one another in a big circle.

The level of security varies heavily from manufacturer to manufacturer.

For example, most modern VW cars require using an ECU exploit (which depending on the specific ECU, almost always requires physically removing the control unit and sometimes requires opening it) to extract encryption key data (CS/MAC) or physical extraction of the instrument cluster EEPROM.

However other manufacturers like Toyota seem to be more vulnerable to other exploits (I only research VW for the most part, so I frankly have no idea what's going on here), including a bizarre process which seems to require disassembling the steering column and unplugging a connector.


Let me explain Docker for Mac in a little more detail [I work on this project at Docker].

Previously, in order to run Linux containers on a Mac, you needed to install VirtualBox and an embedded Linux virtual machine that would run the Docker containers from the Mac CLI. There would be a network endpoint on your Mac that pointed at the Linux VM, and the two worlds were quite separate.

Docker for Mac is a native MacOS X application that embeds a hypervisor (based on xhyve), a Linux distribution and filesystem and network sharing that is much more Mac native. You just drag-and-drop the Mac application to /Applications, run it, and the Docker CLI just works. The filesystem sharing maps OSX volumes seamlessly into the Linux container and remaps MacOS X UIDs into Linux ones (no more permissions problems), and the networking publishes ports to either `docker.local` or `localhost` depending on the configuration.

A lot of this only became possible in recent versions of OSX thanks to the Hypervisor.framework that has been bundled, and the hard work of mist64 who released xhyve (in turn based on bhyve in FreeBSD) that uses it. Most of the processes do not need root access and run as the user. We've also used some unikernel libraries from MirageOS to provide the filesystem and networking "semantic translation" layers between OSX and Linux. Inside the application is also the latest greatest Docker engine, and autoupdates to make it easy to keep up to date.

Although the app only runs Linux containers at present, the Docker engine is gaining support for non-Linux containers, so expect to see updates in this space. This first beta release aims to make the use of Linux containers as happy as possible on Windows and MacOS X, so please report any bugs or feedback to us so we can sort those out first :)


This really tempts me to go back to Linux as my "daily driver".

My main issue is working from home, and connecting to my work's VPN. We use Pulse Secure, and it does a host scan that only works on Windows and Mac OS X.

Has anyone had any experience with getting Pulse Secure running under Wine and having it trick a corporate VPN host-checker that it is indeed a compliant version of Windows?


If you liked the old Fruity, you might enjoy this: https://dunkadunka.com

I think there's a healthy symbiosis between light "state harassment" of Ai and Ai's notoriety as a dissident.

Ai has been told by the security service that his existence is very important to the state. I think this is because his existence and story demonstrate two things: the state does express its disagreement and can detain and punish, yet the state provides an environment open to criticism and does allow people to express themselves.

Ai is high profile and so the allowance of continued activity of a high profile dissident by a state which is in many cases criticized as getting rid of critics is an important counter narrative that the state wants, I believe.

But I think it's more than that also. Ai is very Chinese and represents something very Chinese: a native cultural ideal, a person who is globally recognized as excelling in his field. His work is not just critical of his homeland, it comments on control in many places.

So I think Ai's case, on the whole, functions more as positive rather than negative propaganda for China.

One further and more controversial point I'll make is that Ai is in the same business as Chinese PR: shaping and playing with perceptions. I think Ai realized long ago that art coupled with subtly sensational criticism of his homeland gets him more traction in the global art scene than just art. This is another aspect of the symbiosis. Ai's criticism of China, permitted by China, also gives China a larger profile in the cultural world through Ai. I admire the cleverness of these instances where the Chinese use Western cultural biases against us for their own benefit. Often in the West people are looking to fill in a narrative about the Chinese state being bad, and Ai, by supplying that demand in a satisfying way, enhances his career and, I believe, subtly promotes China's image, using the very narrative demand the West has against the West's intention for it. In other words, the West, by elevating Ai, also elevates China, probably against its intent to do so, precisely because China knows how to play the demand for narrative bias to its advantage.

But even if people in Western civil society organizations understand this, what choice do they have? They want to promote a certain set of values, so they have to use Ai as a persona. So Ai's existence is probably a win-win for all, rather than just the seeming clear win for the anti-China camp that his dissident status on the surface suggests.

There's probably instances of the West using this kind of "appetite for bias" to its own advantage as well but I have not reflected as much about it.

