Monthly DACON Biological Optical Data Analysis AI Competition
1st place solution
Thanks to the competition organizers and to everyone who participated. Great work, everyone!
Our solution used the following features: rho, log(src), log(dst), src==0, dst.isnull(), and wavelength. The neural network was an 11-layer GRU with attention.
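The feature list above can be sketched as a small preprocessing step. This is an illustrative reconstruction, not the authors' actual code: the column names (`rho`, `wavelength`, `src`, `dst`) are assumed from the feature names mentioned, and the zero-clipping before the log is my own guard against `log(0)`.

```python
import numpy as np
import pandas as pd

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the listed features from a long-form measurement table.

    Assumes one row per (sample, wavelength) with columns
    'rho', 'wavelength', 'src', 'dst'.
    """
    out = pd.DataFrame(index=df.index)
    out["rho"] = df["rho"]
    out["wavelength"] = df["wavelength"]
    # Log-transform the intensities; clip to avoid log(0) on zero readings.
    out["log_src"] = np.log(df["src"].clip(lower=1e-9))
    out["log_dst"] = np.log(df["dst"].clip(lower=1e-9))
    # Binary indicators: zero source readings and missing destination values.
    out["src_is_zero"] = (df["src"] == 0).astype(int)
    out["dst_is_null"] = df["dst"].isnull().astype(int)
    return out
```

These per-wavelength feature vectors would then form the sequence fed to the GRU.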
Great work!
Thank you.
Thank you! You did great work too, 정명님!
Is it possible to release the source code? We also tried LSTM with attention and 1D convolution, but the performance was not good (0.89).
I'll try to answer in English first. Let me know if you don't speak English; in that case I'll ask my teammate to translate. Sure, we can share our solution. It's actually a single Google Colab notebook, but we need the approval of the competition organizers for this, don't we?
This comment has been deleted.
Perhaps the instructions will arrive by email.
I saw your Data Science Bowl 2019 solution. We also participated in that competition.
I'm sorry to hear about the DS Bowl; yes, that was a much harder fight too. It seems I deleted your message. Oops, that wasn't intentional. I'm sorry again. Yes, we will share the code a little later, as soon as DACON confirms that it's okay.
I've shared our code here: https://dacon.io/competitions/official/235608/codeshare/1274?page=1&dtype=recent&ptype=pub
Thank you for sharing. I should study it and build a PyTorch version.
Here is the link to the PyTorch version of the same model: https://colab.research.google.com/drive/19GJRcBCkje0h7Sbtfx2ta2c2G_WJFKJ_?usp=sharing
I have no idea why, but the PyTorch version above is slightly worse.
Wow!! Thank you so much!
Your score is amazing!
Thanks! It looks like a simple solution, but it wasn't an easy fight ;) I'm so happy it's all over now.
Thank you for taking the time with DACON, Mr. SF!!
Thanks for sharing. Can you tell us how you thought of applying attention to this dataset?
The brain is a kind of translator from "src" to "dst", so some NLP techniques are very applicable here. We tried a bidirectional RNN with self-attention, because even a 35-element sequence is long enough for an RNN to "forget" something important. Is that what you were asking?
Exactly. I had only tried a vanilla NN approach and gave up on improving it, so I wondered how a GM tackled it.
I appreciate the detailed explanation. Congrats on 1st place!
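The idea above, letting the model attend to any timestep rather than relying on the RNN's last hidden state, can be sketched as a minimal attention pooling over hidden states. This is a generic NumPy illustration of the mechanism, not the authors' implementation; the scoring vector `w` would normally be learned, and is random here only for the demo.

```python
import numpy as np

def self_attention_pool(hidden: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Attention-weighted summary of (bi)directional RNN hidden states.

    hidden: (seq_len, hidden_dim) RNN outputs, one row per timestep.
    w:      (hidden_dim,) scoring vector (learned in a real model).
    Returns a (hidden_dim,) context vector, so no timestep is "forgotten".
    """
    scores = hidden @ w            # (seq_len,) raw alignment scores
    scores -= scores.max()         # stabilise the softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum()       # softmax over the 35 timesteps
    return weights @ hidden        # weighted sum of hidden states

# Demo on a 35-step sequence, matching the spectrum length in this task.
rng = np.random.default_rng(0)
h = rng.normal(size=(35, 16))
context = self_attention_pool(h, rng.normal(size=16))
```

In the actual solution this pooling would sit on top of the stacked GRU layers, and the context vector would feed the final regression head.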
Wow... no matter how I configured my neural network, the model just wouldn't train well, so I'm really looking forward to the code!!
Great work! Congratulations on 1st place.