<p>I get why C++-style operator overloading can make things very confusing, such that permitting overloading of built-in operators is probably undesirable. But what about the kind of custom binary operators you see in languages like Fortran, i.e. things such as:</p>
<p>X = ((A .myCustomOperator. B) .myOtherOperator. C);</p>
<p>Unlike C++-style operator overloading, this is very obviously a user-defined entity, so it can't cause the same kind of confusion as overloading ^ to mean exponentiation or redefining the meaning of the index brackets. It is very nice when dealing with custom numeric types like matrices, and because the custom operators have a unique syntax (defined by a leading and trailing . in the identifier), they are easily distinguished from other constructs. </p>
<p>I get that it is probably not a priority, but in my opinion it is a very nice compromise that avoids the worst problems of operator overloading while still permitting the definition of custom binary operators for the cases where they are needed. </p>
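For comparison, here is a minimal Go sketch of how such a chain reads today with ordinary methods; the Vec3 type and the Cross/Add names are illustrative, not from any real library:

```go
package main

import "fmt"

// Vec3 is a small illustrative 3-vector type; the Cross and Add
// method names are hypothetical, not from any standard library.
type Vec3 struct{ X, Y, Z float64 }

// Cross returns the cross product a × b.
func (a Vec3) Cross(b Vec3) Vec3 {
	return Vec3{
		X: a.Y*b.Z - a.Z*b.Y,
		Y: a.Z*b.X - a.X*b.Z,
		Z: a.X*b.Y - a.Y*b.X,
	}
}

// Add returns the component-wise sum a + b.
func (a Vec3) Add(b Vec3) Vec3 {
	return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z}
}

func main() {
	a := Vec3{1, 0, 0}
	b := Vec3{0, 1, 0}
	c := Vec3{0, 0, 1}
	// Fortran-style:  X = (A .cross. B) .plus. C
	// Go today: method chaining plays the role of the infix operator.
	x := a.Cross(b).Add(c)
	fmt.Println(x) // {0 0 2}
}
```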
<hr/>**Comments:**<br/><br/>Ainar-G: <pre><p>Well, the Go FAQ <a href="https://golang.org/doc/faq#overloading">says</a>:</p>
<blockquote>
<p>Regarding operator overloading, it seems more a convenience than an absolute requirement. Again, things are simpler without it.</p>
</blockquote>
<p>I guess it applies to custom operators as well, so the answer to your question is most probably "no". And honestly, I really wouldn't want that feature. In my opinion, no matter how bad a method name is, it will still be more informative than something like</p>
<pre><code>a := b $. c <<## (d @|@ e)
</code></pre>
<p>Also, how would adding custom operators interact with interfaces? Does that mean that there will now be</p>
<pre><code>type Foo interface {
op *^* (Foo)
}
</code></pre>
<p>? Can there also be an interface with both operators and methods?</p></pre>jerf: <pre><blockquote>
<p>Can there also be an interface with both operators and methods?</p>
</blockquote>
<p>Yes, Haskell does that in typeclasses just fine. That's just a specific answer to your question; there's basically no chance of Go getting custom operators.</p>
<p>From Haskell we also learn that we need to be able to specify order of operations, which it does via numeric precedence levels.</p></pre>waveswan: <pre><p>Well, the way it used to work back when I used Fortran (they may have changed it since) was that user-defined operators were constrained by the same naming rules as other identifiers, so both of the examples you mentioned would have been rejected. </p>
<p>That is, the operator <strong>.v_cross_product.</strong> would have been permitted, but <strong>.++.</strong> would not have been. That is what I was getting at with the comparison to C++. I very much agree with you that introducing a bunch of arbitrary abstract symbols is probably undesirable. </p>
<p>Essentially, it would pretty much just be a shorthand for doing this: </p>
<p>X = (A.foo(B)).bar(C); </p>
<p>Could also be written: </p>
<p>X = (A .foo. B) .bar. C; </p>
<p>It would be nothing more than permitting an infix syntax for calling methods that take exactly two arguments. </p></pre>jerf: <pre><p>Between your discussion of Fortran and you using cross product as your example, I infer you likely want to do heavy-duty math, maybe even scientific computing, in Go.</p>
<p>My advice is don't. It's not a first-class concern of the language and I see no reason to believe it ever will be, barring massive leadership change. Consider Julia instead. Go is very much not designed to try to be the best at everything; it's really quite a focused language and if your problem is not in its focus, or happens to be fairly "close" to it, you're really better off using something else.</p></pre>howeman: <pre><p>There are trade-offs between the languages, as you might expect. Jerf and I have discussed this before. Jerf has the opinion expressed above, while I have been very happy doing scientific Go the last few years. It is definitely true that you can have all of the operators you want in Julia <a href="https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm">https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm</a> . At least, I very much hope that's all the operators you could want. </p></pre>jerf: <pre><p>Yes, I almost put the YMMV caveat on my post, but, well, it's sort of always implied anyhow. :)</p></pre>howeman: <pre><p>Indeed, it is always implied. I just disagree with your analysis. What you say seems to be "common wisdom" among Go programmers, even though I think it is false. I know you are not just parroting the view of others, but I don't think it should be common wisdom, and so I'm just expressing the alternate viewpoint :).</p></pre>jerf: <pre><p>You know, if you type up a blog post about your workflow and what you do, I promise to upvote it and not ramble on endlessly in the comments about how the apocalypse is upon us all and everyone should turn to Haskell for salvation.</p>
<p>I'm also not going to stop suggesting it's not what most people are looking for. It seems to me a lot of people see Go, see "concurrency!", and think that makes it a good choice for scientific programming, when they really want parallelism. And I say that as someone who generally doesn't like it when people start slicing and dicing whether something is "concurrent" or "parallel".</p>
<p>I suppose I should also be careful to point out I don't think it's good for <em>mathematical</em> computation. Go would be fine for agent-based simulations that are more logic than math.</p></pre>howeman: <pre><p>I'd like to start writing things up. A lot of the tools have been in flux, but we're getting close to 1.0 on matrix/mat64. Do you mind expanding on what you mean by "workflow"? The Go workflow posts I see talk about middleware handler libraries, database querying, and front-end interop. Right now I am working on algorithmic design for optimization, and my outputs are floating point numbers to the terminal. When it starts working the way I think it should, this will be converted to plots/csv files.</p>
<p>I think it's fair to suggest it's not what many are looking for. If what you want to do is load in some data, perform a couple of simple operations, and make a plot, Go is not the easiest language. It's getting sufficient at it, but it's hard to beat R/Matlab/Python for that purpose. I think Go is a good language for building the tools to generate that csv in the first place. </p>
<p>Why do you say Go is a bad language for parallelism? I agree it has problems with SIMD. Beyond that though, goroutines are very powerful. Because they're so cheap, you can design your library to use them (within reason) and not need to worry about destroying the overall performance. Go also has issues at the moment with large-scale parallelism, but I don't see how the fundamentals of Go are bad relative to other languages. You can use MPI semantics just as well in Go (and Go has the added bonus of easy shared memory). If someone could design an efficient "shared memory" cluster, the concurrent Go programs could automatically expand to fill all the cores, achieving parallelism.</p></pre>jerf: <pre><blockquote>
<p>Do you mind expanding on what you mean by "workflow"? </p>
</blockquote>
<p>I mean your personal process, choice of libraries, how you like Go better than the alternatives you used before (if any) because X, Y, and Z, etc. waveswan's post is a lot like what I mean, except you have experience.</p>
<blockquote>
<p>Why do you say Go is a bad language for parallelism? I agree it has problems with SIMD.</p>
</blockquote>
<p><a href="http://blog.golang.org/concurrency-is-not-parallelism" rel="nofollow">This happens to be the first response on Google</a>, but the idea goes beyond the Go community. People draw a sharp distinction between "concurrency", doing many heterogeneous things at the same time, and "parallelism", doing the same thing to many different bits of data at the same time.</p>
<p>I personally find the distinction somewhat overblown; the tools for concurrency often turn out to be easy to turn to parallelism, making the boundary less sharp than some people like to say. But it is true that Go is really good at concurrency and has basically <em>no</em> support for parallelism. SIMD, as you mention, is basically the opening bid, but there's also GPUs, moving data across clusters, all sorts of other things. Having nice goroutines isn't unique enough, especially when a lot of scientific computing is perfectly happy with straight-up OS threads.</p>
<p>Again, excepting perhaps massive agent-based systems, why do you need <em>millions</em> of threads? If you're good with "one or two per CPU", a ton of other languages have you covered. (And to be clear, there's a lot of middle ground there; modern OSes are happier at much higher numbers of threads nowadays.)</p>
<blockquote>
<p>but I don't see how the fundamentals of Go are bad relative to other languages</p>
</blockquote>
<p>They aren't "bad" in the way that I wouldn't want to write this in Perl or Python without numpy bindings, in which you'd be stuck in a single-threaded world with terrible performance, but I don't see how they are <em>good</em>. You get no operator overloading, the libraries aren't there, etc. The fact that all these problems are solvable, IMHO, is beaten by the fact they are already solved (or, at least, <em>more</em> solved) in other languages, which also have better communities and momentum.</p>
<blockquote>
<p>If someone could design an efficient "shared memory" cluster, </p>
</blockquote>
<p>Well, the wheel turns, but right now it's turning the other direction; NUMA means local computers work more like clusters. You can't dodge the fact that a different computer is probably at least 10 nanoseconds away at the absolute minimum, and can easily be hundreds of nanoseconds away, by sheer physics.</p>
<p>That said, this is all just in reply to your questions. I'd still be interested in hearing what you do.</p></pre>howeman: <pre><p>I still don't understand the emphasis on " 'no' support for parallelism". Take your concurrent program. If you're running Go 1.4 or earlier, set runtime.GOMAXPROCS(runtime.NumCPU()). If you're on Go 1.5, your program is already running in parallel. Just today I took a Monte Carlo simulation and made it parallel in 30 seconds. Granted, I already knew how to do it, and my function was coded without shared state (good practice anyway), but still, I easily moved from 100% CPU usage to 800% CPU usage.</p>
<p>GPU support doesn't exist at the moment, but I have called cuBLAS from Go. I didn't turn it into a full interface that can be used with gonum/blas (partly because I don't have a GPU to test), but once done it can be swapped in as the BLAS library. </p>
<p>For one task, you rarely need millions of cores. Monte Carlo can of course use them, but it's not necessary. Where this is important is the interaction of tasks. The proposed quadrature library for gonum allows parallel computation of the function at the quadrature points. Should you execute in parallel? In general it would be great, but what if the function is the norm of the gradient, estimated with finite difference? Gonum also allows finite difference to be executed concurrently. What if the finite-differenced function calls matrix multiplication, which is also coded to be parallel using goroutines? Since goroutines are cheap, this multiplicity is much less relevant than it is with threads. Library designers can use the parallelism that is best for them without needing to worry about potential whole-program effects.</p>
<p>I don't understand your comment about NUMA. To me the fact that computers are getting more like clusters means that we should be moving toward one programming paradigm (shared vs. not) rather than having competing models. </p>
<p>Libraries not being there is fair, though a lot of the support is wrapping a set of C functions which Go makes easy. Obviously this depends on what you do. </p>
<p>The reasons why Go is good are the general reasons to like Go. It's a simple composable language that makes writing reusable and readable code easy. A compiler not only catches bugs early, but additionally helps legibility and speed. It's got great tools for testing, profiling, benchmarking, race detecting, and documentation. </p></pre>waveswan: <pre><p>Well, since I started this thread I might as well explain why I like go and would consider using it even for maths and science. </p>
<p>Basically it boils down to Go having a very nice combination of type safety, simple control flow constructs, memory management, and easy tools for parsing text. Most other languages have harsh trade-offs. C/C++ are absolute monsters with manual memory management, and string handling is a nightmare. Java and derivatives require the runtime environment, which can easily result in incompatible dependencies between versions. Python, Perl, and many other interpreted languages lack type safety. Haskell is nice, until you need some kind of control flow, or try to understand something written by another programmer, or even yourself from a while ago. There are lots of other examples, but it is the same theme. Go is somewhat unique in that it seems to use only tried and tested features that are known to work well together. </p>
<p>Thus I can understand reluctance to support too much genericity and too many high-level features. The last thing you want is something akin to the C++ macro and template system, which lets you redefine the entire language. </p>
<p>Ironically, infix function calls would be almost the polar opposite of C++, since they are one of the few things C++ will NOT let you do. In C++, custom operators MUST be modified versions of the ones built into the language, which is the exact opposite of sensible, and I suspect it is part of why so many people are hostile to the very notion of custom operators.</p>
<p>Contrast that to simply allowing binary functions to be called with an infix notation, which automatically forces custom operator names to be alphanumeric identifiers and prevents shenanigans like redefining the meaning of assignment or pointer dereferencing.</p>
<p>Like I said, I understand that it is not a priority and why one might wish to keep the language simple, but it sure is not a C++ feature. In fact, C++ won't allow you to do it unless you seriously abuse the macro and/or template system. </p></pre>jerf: <pre><p>You should double-check Julia, but bear in mind I'm recommending it based on hearsay, not personal experience.</p>
<p>At the moment Go's most likely big problem for you is that it only exposes a model of the computer with no SIMD, no great support for GPUs, etc. It'll outrun naive Python handily, but it'll get creamed by anything you can write that can use SIMD or GPUs (ironically, including numpy)... if your needs fit into those paradigms. If you for some reason really can't do better than a very 2000-ish model of what arithmetic a computer can do because of lots of branching or something, then Go is a good fit. (No sarcasm. I can imagine such tasks.)</p></pre>waveswan: <pre><p>It's more that I rarely do anything where raw performance even matters that much. Typical use case is things like grabbing data from a multi-channel detector, some random document, that XML file somebody else published their data in.. and then convert it into something which can be fed through the actual algorithm. Longest run time I've had so far was a few minutes, and given that it can easily take hours to fix some random bug, I care much more about being able to write comprehensible and somewhat reliable code.</p>
<p>Go seems really nice in this regard. You can easily write imperative code. It is garbage collected. The standard string and regexp libraries make parsing simple, and the type system seems solid.</p>
<p>Also, I can live without operators. Just seemed like a nice feature to have, but it is hardly a make or break thing.</p></pre>howeman: <pre><p>Of course, if numpy counts then surely the gonum BLAS package does as well. You can register any BLAS implementer you'd like for matrix operations (gonum/blas/blas64.Use for the registration, gonum/blas/cgo for the wrapper).</p></pre>howeman: <pre><p>In matrix math, frequently you want to save allocations for speed (see: any discussion in Julia on optimization). While c = a * b looks nice (or pick your overloaded symbol that isn't *), in a loop or something you'll want to reuse the c storage.</p>
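That allocation-reuse point can be sketched with a toy dense-matrix type whose Mul writes into its receiver; this is an illustrative stand-in, not the actual gonum/mat64 implementation:

```go
package main

import "fmt"

// Dense is a toy row-major matrix, standing in for a real
// implementation like gonum's mat64.Dense.
type Dense struct {
	rows, cols int
	data       []float64
}

// NewDense builds an r×c matrix; a nil data slice gets zero storage.
func NewDense(r, c int, data []float64) *Dense {
	if data == nil {
		data = make([]float64, r*c)
	}
	return &Dense{rows: r, cols: c, data: data}
}

// Mul stores a*b into the receiver, reusing c's backing array when
// it is already the right size — the allocation saving that a
// c = a*b operator syntax would hide.
func (c *Dense) Mul(a, b *Dense) {
	if len(c.data) != a.rows*b.cols {
		c.data = make([]float64, a.rows*b.cols)
	}
	c.rows, c.cols = a.rows, b.cols
	for i := 0; i < a.rows; i++ {
		for j := 0; j < b.cols; j++ {
			sum := 0.0
			for k := 0; k < a.cols; k++ {
				sum += a.data[i*a.cols+k] * b.data[k*b.cols+j]
			}
			c.data[i*c.cols+j] = sum
		}
	}
}

func main() {
	a := NewDense(2, 2, []float64{1, 2, 3, 4})
	b := NewDense(2, 2, []float64{5, 6, 7, 8})
	c := NewDense(2, 2, nil)
	for i := 0; i < 3; i++ {
		c.Mul(a, b) // inside a loop, c's storage is reused each pass
	}
	fmt.Println(c.data) // [19 22 43 50]
}
```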
<p>In the gonum matrix suite (godoc.org/github.com/gonum/matrix/mat64), one does c.Mul(a, b). This saves allocations and has the added bonus of looking more like infix notation than c = a.mul(b).</p></pre>YEPHENAS: <pre><p>They're called methods. Instead of "A .myCustomOperator. B" you write "A.myCustomOperator(B)"</p></pre>IntellectualReserve: <pre><p>You're essentially talking about introducing another way to call functions with arguments. </p>
<p>I understand the rationale for function invocation in the form of myStruct.myFunc()</p>
<p>What's the rationale for introducing custom operator function invocation?</p></pre>john_84: <pre><p>I just want to point out that zillions of things have been suggested, some so inconsequential they would hardly change the language, and very few have been considered seriously by the Go team. So the answer is no. The Go designers kind of loathe C++, and Go was basically designed as the anti-C++; they will certainly not implement any feature that comes from C++.</p>
<p>The DIY alternative is a DSL that you either compile with go generate or use inline with some reflection. But never expect anything will be added to the language, because it likely won't. That's something important to keep in mind when using Go.</p></pre>
Is there any chance Go will support custom (not overloaded) operators in the future?