Hacker News

I've recently started using their micro-ORM Dapper, and I like it a lot. I get the performance of hand-coded SQL, but without the tedious mapping from SqlDataReader to my entity.
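For anyone who hasn't seen it, the difference is roughly this (a sketch; the table and column names are made up):

    public class Post
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public int Score { get; set; }
    }

    // With Dapper: one call, columns mapped to properties by name.
    var posts = connection.Query<Post>(
        "select Id, Title, Score from Posts where Score > @minScore",
        new { minScore = 10 }).AsList();

versus manually looping over a SqlDataReader and calling GetInt32/GetString/GetOrdinal for every column of every row.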


I've been using Dapper for around three years; "it just works" and is very fast. I've been around the block a few times with ORMs - LINQ to SQL, NHibernate, Entity Framework, Massive, SubSonic. For small to medium-sized projects there's no contest, provided you don't mind hand-crafting your SQL. Hats off to Marc, Sam and team for building this.


Dapper is great - I promptly dumped NHibernate and Entity Framework for it. It's the only ORM I've found that makes it trivially easy (as it should be) to use stored procedures and views in queries.
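For reference, a stored procedure call in Dapper is just this (a sketch; the proc name and parameters are hypothetical):

    var posts = connection.Query<Post>(
        "dbo.GetPostsByUser",
        new { UserId = 42 },
        commandType: CommandType.StoredProcedure);

No mapping files, no configuration - the result set's columns map straight onto the Post class's properties.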


+1 for Dapper - I'm a big fan. It is very fast and we use it to serve our live (mostly read-only) traffic, while using something like Entity Framework for the backend Administrative UI, which has a lot of inserts and updates.


There are a couple of user-contributed projects that do similar things for inserts & updates. Dapper Extensions is the one I'm using.

https://github.com/tmsmith/Dapper-Extensions
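The CRUD side looks roughly like this (a sketch of Dapper Extensions' IDbConnection extension methods; the Post class is hypothetical):

    using DapperExtensions;

    var post = connection.Get<Post>(42);   // select by primary key
    post.Title = "Updated title";
    connection.Update(post);               // update by key
    var id = connection.Insert(new Post { Title = "New" }); // insert, returns the generated key

By default it maps class and property names straight to table and column names, with ClassMapper subclasses available for when they differ.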


I wrote a Dapper extension for working with SQL Server's geospatial queries and types a couple of years ago - have they added anything like that yet? Otherwise I'd be happy to add it.


Marc Gravell added the ability to map pretty much any custom type, without adding any dependency weight, about a month ago. You can see the commit here: https://github.com/StackExchange/dapper-dot-net/commit/e26ee...

Look towards the end at the tests for an example of how to hook up a custom type (it's pretty simple).
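For the record, the hook is a SqlMapper.TypeHandler<T> subclass registered once at startup - a sketch with a made-up wrapper type:

    public class SlugHandler : SqlMapper.TypeHandler<Slug>
    {
        // writing: how the value goes into a command parameter
        public override void SetValue(IDbDataParameter parameter, Slug value)
        {
            parameter.Value = value.ToString();
        }

        // reading: how a column value becomes your type
        public override Slug Parse(object value)
        {
            return new Slug((string)value);
        }
    }

    // once, at startup:
    SqlMapper.AddTypeHandler(new SlugHandler());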


Cool - the one I wrote was years ago. Looks like it's not necessary any more.


Another library worth mentioning in this space is Massive, which uses .NET dynamics a lot (focusing slightly less on performance than ease of use). https://github.com/robconery/massive


Dapper is nice, but if you want to have the compiler check your queries, and still run fast, I recommend LINQ to DB: https://github.com/linq2db/linq2db

(not to be confused with LINQ to SQL from Microsoft)


To be fair, the compiler is checking against what you told it the database looks like. This can help with lots of things, but still has the fundamental disconnect problem you'll get at runtime with either approach.
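To illustrate: the "schema" the compiler checks is just attributes (or fluent mapping) that you wrote yourself, something like this sketch:

    [Table("Posts")]
    public class Post
    {
        [Column, PrimaryKey] public int Id { get; set; }
        [Column] public string Title { get; set; }
        [Column] public int Score { get; set; }
    }

    // A typo in Title or Score is now a compile error - but only
    // because the mapping above claims those columns exist; nothing
    // verifies it against the actual database.
    var titles = db.GetTable<Post>()
                   .Where(p => p.Score > 10)
                   .Select(p => p.Title)
                   .ToList();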

A huge problem with the tradeoff we had (well, still have in some areas) is that the generated SQL is nasty, and finding the original code it came from is often non-trivial. Lack of ability to hint queries, control parameterization, etc. is also a big issue when trying to optimize queries. For example we (and by "we" I mean, "I made Marc Gravell do it") added literal replacement to Dapper to help with query parameterization, which allows you to use things like filtered indexes. In Dapper we also intercept the SQL calls and add exactly where each one came from. Here's what that looks like (replicated for the other methods):

    public static IEnumerable<T> Query<T>(this DataContext db, SqlBuilder.Template template, bool buffered = true, int? commandTimeout = null, IDbTransaction transaction = null, [CallerFilePath]string fromFile = null, [CallerLineNumber]int onLine = 0, string comment = null)
    {
        using (db.Connection.EnsureOpen())
        {
            return SqlMapper.Query<T>(db.Connection, MarkSqlString(template.RawSql, fromFile, onLine, comment), template.Parameters as object, transaction ?? db.Transaction, buffered, commandTimeout);
        }
    }
And here's our marking method - tailored for our code, but you get the idea:

    private static string MarkSqlString(string sql, string path, int lineNumber, string comment)
    {
        if (path.IsNullOrEmpty() || lineNumber == 0)
        {
            return sql;
        }

        var commentWrap = " ";
        var i = sql.IndexOf(Environment.NewLine);

        // if we didn't find \n, or it was the very end, go to the first space instead
        if (i < 0 || i == sql.Length - 1)
        {
            i = sql.IndexOf(' ');
            commentWrap = Environment.NewLine;
        }

        if (i < 0) return sql;

        // Grab one directory and the file name worth of the path
        //   this dodges problems with the build server using temp dirs
        //   but also gives us enough info to uniquely identify a query's location
        var split = path.LastIndexOf('\\') - 1;
        if (split < 0) return sql;
        split = path.LastIndexOf('\\', split);

        if (split < 0) return sql;

        split++; // just for Craver

        var sqlComment = " /* " + path.Substring(split) + "@" + lineNumber + (comment.HasValue() ? " - " + comment : "") + " */" + commentWrap;

        return sql.Substring(0, i) +
               sqlComment +
               sql.Substring(i);
    }
This results in a comment at the top of the query like this:

    /* Models\Post.LinkedQuestions.cs@105 */ 
    select top (@top) 
           p.Id 
         , p.Title
         , p.Score
It's such a simple approach using the caller info attributes, but it saves so much time tracking things down.
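(The literal replacement mentioned above, for anyone curious, looks like this - {=name} is Dapper's token syntax for injecting the value as a literal rather than a parameter, which is what lets SQL Server match a filtered index; the query here is a made-up sketch:

    var posts = connection.Query<Post>(
        "select Id, Title from Posts where IsDeleted = {=Deleted} and Score > @min",
        new { Deleted = 0, min = 10 });

It's restricted to simple value types like ints and bools, for obvious injection reasons.)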


I'm curious if the comments above exist in the source code, or have been added here for the benefit of readers on HN?


That's a straight copy/paste from the Stack Overflow codebase (Helpers\SqlMapper.cs:332 at time of writing, if you care). Now the typos and snark are out there for everyone to see.


Another awesome one which I've used on a previous project is SqlFu, https://github.com/sapiens/SqlFu



